00:00:00.001 Started by upstream project "autotest-per-patch" build number 132285 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.075 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.079 The recommended git tool is: git 00:00:00.079 using credential 00000000-0000-0000-0000-000000000002 00:00:00.082 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.128 Fetching changes from the remote Git repository 00:00:00.130 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.183 Using shallow fetch with depth 1 00:00:00.183 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.183 > git --version # timeout=10 00:00:00.226 > git --version # 'git version 2.39.2' 00:00:00.226 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.257 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.257 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.109 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.121 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.133 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.133 > git config core.sparsecheckout # timeout=10 00:00:05.146 > git read-tree -mu HEAD # timeout=10 00:00:05.162 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.179 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.179 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.263 [Pipeline] Start of Pipeline 00:00:05.276 [Pipeline] library 00:00:05.278 Loading library shm_lib@master 00:00:05.278 Library shm_lib@master is cached. Copying from home. 00:00:05.294 [Pipeline] node 00:00:05.303 Running on GP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.305 [Pipeline] { 00:00:05.316 [Pipeline] catchError 00:00:05.318 [Pipeline] { 00:00:05.333 [Pipeline] wrap 00:00:05.342 [Pipeline] { 00:00:05.349 [Pipeline] stage 00:00:05.350 [Pipeline] { (Prologue) 00:00:05.560 [Pipeline] sh 00:00:05.877 + logger -p user.info -t JENKINS-CI 00:00:05.913 [Pipeline] echo 00:00:05.915 Node: GP12 00:00:05.922 [Pipeline] sh 00:00:06.346 [Pipeline] setCustomBuildProperty 00:00:06.359 [Pipeline] echo 00:00:06.361 Cleanup processes 00:00:06.366 [Pipeline] sh 00:00:06.697 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.697 174665 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.749 [Pipeline] sh 00:00:07.069 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.069 ++ grep -v 'sudo pgrep' 00:00:07.069 ++ awk '{print $1}' 00:00:07.069 + sudo kill -9 00:00:07.069 + true 00:00:07.091 [Pipeline] cleanWs 00:00:07.104 [WS-CLEANUP] Deleting project workspace... 00:00:07.104 [WS-CLEANUP] Deferred wipeout is used... 
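Editor's note: the Prologue stage above clears any stale test processes before the run. A minimal standalone sketch of that cleanup idiom, using the workspace path from this job's log (the trailing "|| true" is what lets the pipeline continue when pgrep finds nothing, as happens here):

  #!/usr/bin/env bash
  # Sketch of the stale-process cleanup seen in the Prologue stage.
  # The workspace path is the one from this job; adjust for other jobs.
  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # List PIDs of processes matching the workspace, dropping the pgrep call itself.
  pids=$(sudo pgrep -af "$WS" | grep -v 'sudo pgrep' | awk '{print $1}')
  # kill -9 with an empty PID list exits non-zero, hence the || true guard.
  sudo kill -9 $pids || true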
00:00:07.115 [WS-CLEANUP] done 00:00:07.118 [Pipeline] setCustomBuildProperty 00:00:07.128 [Pipeline] sh 00:00:07.451 + sudo git config --global --replace-all safe.directory '*' 00:00:07.611 [Pipeline] httpRequest 00:00:08.726 [Pipeline] echo 00:00:08.727 Sorcerer 10.211.164.20 is alive 00:00:08.736 [Pipeline] retry 00:00:08.739 [Pipeline] { 00:00:08.753 [Pipeline] httpRequest 00:00:08.759 HttpMethod: GET 00:00:08.760 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.762 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.802 Response Code: HTTP/1.1 200 OK 00:00:08.802 Success: Status code 200 is in the accepted range: 200,404 00:00:08.803 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:16.780 [Pipeline] } 00:00:16.798 [Pipeline] // retry 00:00:16.806 [Pipeline] sh 00:00:17.139 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:17.175 [Pipeline] httpRequest 00:00:17.587 [Pipeline] echo 00:00:17.589 Sorcerer 10.211.164.20 is alive 00:00:17.599 [Pipeline] retry 00:00:17.601 [Pipeline] { 00:00:17.617 [Pipeline] httpRequest 00:00:17.623 HttpMethod: GET 00:00:17.623 URL: http://10.211.164.20/packages/spdk_318515b44ec8b67f83bcc9ca83f0c7d5ea919e62.tar.gz 00:00:17.625 Sending request to url: http://10.211.164.20/packages/spdk_318515b44ec8b67f83bcc9ca83f0c7d5ea919e62.tar.gz 00:00:17.644 Response Code: HTTP/1.1 200 OK 00:00:17.645 Success: Status code 200 is in the accepted range: 200,404 00:00:17.645 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_318515b44ec8b67f83bcc9ca83f0c7d5ea919e62.tar.gz 00:02:25.309 [Pipeline] } 00:02:25.326 [Pipeline] // retry 00:02:25.333 [Pipeline] sh 00:02:25.621 + tar --no-same-owner -xf spdk_318515b44ec8b67f83bcc9ca83f0c7d5ea919e62.tar.gz 00:02:28.975 [Pipeline] sh 00:02:29.261 + git -C spdk log --oneline -n5 00:02:29.261 318515b44 nvme/perf: interrupt mode support for pcie controller 00:02:29.261 7bc1134d6 test/scheduler: Read PID's status file only once 00:02:29.261 0b65bb478 test/scheduler: Account for multiple cpus in the affinity mask 00:02:29.261 a96685099 test/nvmf: Tweak nvme_connect() 00:02:29.261 90486f7e8 accel/dpdk_compressdev: Use the proper spdk_free function in error path 00:02:29.273 [Pipeline] } 00:02:29.286 [Pipeline] // stage 00:02:29.293 [Pipeline] stage 00:02:29.295 [Pipeline] { (Prepare) 00:02:29.311 [Pipeline] writeFile 00:02:29.327 [Pipeline] sh 00:02:29.614 + logger -p user.info -t JENKINS-CI 00:02:29.627 [Pipeline] sh 00:02:29.914 + logger -p user.info -t JENKINS-CI 00:02:29.927 [Pipeline] sh 00:02:30.215 + cat autorun-spdk.conf 00:02:30.215 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:30.215 SPDK_TEST_NVMF=1 00:02:30.215 SPDK_TEST_NVME_CLI=1 00:02:30.215 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:30.215 SPDK_TEST_NVMF_NICS=e810 00:02:30.215 SPDK_TEST_VFIOUSER=1 00:02:30.215 SPDK_RUN_UBSAN=1 00:02:30.215 NET_TYPE=phy 00:02:30.223 RUN_NIGHTLY=0 00:02:30.228 [Pipeline] readFile 00:02:30.253 [Pipeline] withEnv 00:02:30.255 [Pipeline] { 00:02:30.267 [Pipeline] sh 00:02:30.556 + set -ex 00:02:30.556 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:30.556 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:30.556 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:30.556 ++ SPDK_TEST_NVMF=1 00:02:30.556 ++ SPDK_TEST_NVME_CLI=1 00:02:30.556 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:30.556 
++ SPDK_TEST_NVMF_NICS=e810 00:02:30.556 ++ SPDK_TEST_VFIOUSER=1 00:02:30.556 ++ SPDK_RUN_UBSAN=1 00:02:30.556 ++ NET_TYPE=phy 00:02:30.556 ++ RUN_NIGHTLY=0 00:02:30.556 + case $SPDK_TEST_NVMF_NICS in 00:02:30.556 + DRIVERS=ice 00:02:30.556 + [[ tcp == \r\d\m\a ]] 00:02:30.556 + [[ -n ice ]] 00:02:30.556 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:30.556 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:33.856 rmmod: ERROR: Module irdma is not currently loaded 00:02:33.856 rmmod: ERROR: Module i40iw is not currently loaded 00:02:33.856 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:33.856 + true 00:02:33.856 + for D in $DRIVERS 00:02:33.856 + sudo modprobe ice 00:02:33.856 + exit 0 00:02:33.866 [Pipeline] } 00:02:33.881 [Pipeline] // withEnv 00:02:33.886 [Pipeline] } 00:02:33.900 [Pipeline] // stage 00:02:33.909 [Pipeline] catchError 00:02:33.911 [Pipeline] { 00:02:33.925 [Pipeline] timeout 00:02:33.925 Timeout set to expire in 1 hr 0 min 00:02:33.927 [Pipeline] { 00:02:33.941 [Pipeline] stage 00:02:33.943 [Pipeline] { (Tests) 00:02:33.957 [Pipeline] sh 00:02:34.245 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:34.245 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:34.245 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:34.245 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:34.245 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.245 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:34.245 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:34.245 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:34.245 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:34.245 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:34.245 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:34.245 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:34.245 + source /etc/os-release 00:02:34.245 ++ NAME='Fedora Linux' 00:02:34.245 ++ VERSION='39 (Cloud Edition)' 00:02:34.245 ++ ID=fedora 00:02:34.245 ++ VERSION_ID=39 00:02:34.245 ++ VERSION_CODENAME= 00:02:34.245 ++ PLATFORM_ID=platform:f39 00:02:34.245 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:34.245 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:34.245 ++ LOGO=fedora-logo-icon 00:02:34.245 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:34.245 ++ HOME_URL=https://fedoraproject.org/ 00:02:34.245 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:34.245 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:34.245 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:34.245 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:34.245 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:34.245 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:34.245 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:34.245 ++ SUPPORT_END=2024-11-12 00:02:34.245 ++ VARIANT='Cloud Edition' 00:02:34.245 ++ VARIANT_ID=cloud 00:02:34.245 + uname -a 00:02:34.245 Linux spdk-gp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:34.245 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:35.184 Hugepages 00:02:35.184 node hugesize free / total 00:02:35.184 node0 1048576kB 0 / 0 00:02:35.184 node0 2048kB 0 / 0 00:02:35.184 node1 1048576kB 0 / 0 00:02:35.184 node1 2048kB 0 / 0 00:02:35.184 00:02:35.184 Type BDF Vendor Device NUMA Driver Device Block 
devices 00:02:35.184 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:35.184 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:02:35.184 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:35.184 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:35.184 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:35.184 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:35.184 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:35.184 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:35.184 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:35.184 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:35.184 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:35.184 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:35.184 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:35.184 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:35.185 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:35.185 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:35.185 NVMe 0000:81:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:35.443 + rm -f /tmp/spdk-ld-path 00:02:35.443 + source autorun-spdk.conf 00:02:35.443 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.443 ++ SPDK_TEST_NVMF=1 00:02:35.443 ++ SPDK_TEST_NVME_CLI=1 00:02:35.443 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:35.443 ++ SPDK_TEST_NVMF_NICS=e810 00:02:35.443 ++ SPDK_TEST_VFIOUSER=1 00:02:35.443 ++ SPDK_RUN_UBSAN=1 00:02:35.443 ++ NET_TYPE=phy 00:02:35.443 ++ RUN_NIGHTLY=0 00:02:35.443 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:35.443 + [[ -n '' ]] 00:02:35.443 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.443 + for M in /var/spdk/build-*-manifest.txt 00:02:35.443 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:35.443 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:35.443 + for M in /var/spdk/build-*-manifest.txt 00:02:35.443 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:35.443 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:35.443 + for M in /var/spdk/build-*-manifest.txt 00:02:35.443 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:35.443 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:35.443 ++ uname 00:02:35.443 + [[ Linux == \L\i\n\u\x ]] 00:02:35.443 + sudo dmesg -T 00:02:35.443 + sudo dmesg --clear 00:02:35.443 + dmesg_pid=176021 00:02:35.443 + [[ Fedora Linux == FreeBSD ]] 00:02:35.443 + sudo dmesg -Tw 00:02:35.443 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.443 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.443 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:35.443 + [[ -x /usr/src/fio-static/fio ]] 00:02:35.443 + export FIO_BIN=/usr/src/fio-static/fio 00:02:35.443 + FIO_BIN=/usr/src/fio-static/fio 00:02:35.443 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:35.443 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:35.443 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:35.443 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.443 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.443 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:35.443 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.443 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.443 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:35.443 10:21:23 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:35.443 10:21:23 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:35.443 10:21:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.443 10:21:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:35.443 10:21:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:35.443 10:21:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:35.443 10:21:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:35.443 10:21:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:35.443 10:21:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:35.443 10:21:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:35.443 10:21:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:35.443 10:21:23 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:35.443 10:21:23 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:35.443 10:21:23 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:35.443 10:21:23 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:35.443 10:21:23 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:35.443 10:21:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:35.443 10:21:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.443 10:21:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.443 10:21:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.443 10:21:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.443 10:21:23 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.443 10:21:23 -- paths/export.sh@5 -- $ export PATH 00:02:35.443 10:21:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.443 10:21:23 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:35.443 10:21:23 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:35.443 10:21:23 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731662483.XXXXXX 00:02:35.443 10:21:23 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731662483.W0GQR8 00:02:35.443 10:21:23 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:35.443 10:21:23 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:35.443 10:21:23 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:35.443 10:21:23 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:35.443 10:21:23 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:35.443 10:21:23 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:35.443 10:21:23 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:35.443 10:21:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.443 10:21:23 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:35.443 10:21:23 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:35.443 10:21:23 -- pm/common@17 -- $ local monitor 00:02:35.443 10:21:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.443 10:21:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.443 10:21:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.443 10:21:23 -- pm/common@21 -- $ date +%s 00:02:35.443 10:21:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.443 10:21:23 -- pm/common@21 -- $ date +%s 00:02:35.443 10:21:23 -- pm/common@25 -- $ sleep 1 00:02:35.443 10:21:23 -- pm/common@21 -- $ date +%s 00:02:35.443 10:21:23 -- pm/common@21 -- $ date +%s 00:02:35.443 10:21:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731662483 00:02:35.443 10:21:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731662483 00:02:35.443 10:21:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731662483 00:02:35.443 10:21:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731662483 00:02:35.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731662483_collect-vmstat.pm.log 00:02:35.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731662483_collect-cpu-load.pm.log 00:02:35.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731662483_collect-cpu-temp.pm.log 00:02:35.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731662483_collect-bmc-pm.bmc.pm.log 00:02:36.382 10:21:24 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:36.382 10:21:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:36.382 10:21:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:36.382 10:21:24 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.382 10:21:24 -- spdk/autobuild.sh@16 -- $ date -u 00:02:36.382 Fri Nov 15 09:21:24 AM UTC 2024 00:02:36.382 10:21:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:36.382 v25.01-pre-185-g318515b44 00:02:36.382 10:21:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:36.382 10:21:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:36.382 10:21:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:36.382 10:21:24 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:36.382 10:21:24 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:36.382 10:21:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.641 ************************************ 00:02:36.641 START TEST ubsan 00:02:36.641 ************************************ 00:02:36.641 10:21:24 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:36.641 using ubsan 00:02:36.641 00:02:36.641 real 0m0.000s 00:02:36.641 user 0m0.000s 00:02:36.641 sys 0m0.000s 00:02:36.641 10:21:24 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:36.641 10:21:24 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:36.641 ************************************ 00:02:36.641 END TEST ubsan 00:02:36.641 ************************************ 00:02:36.641 10:21:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:36.641 10:21:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:36.641 10:21:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:36.641 10:21:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:36.641 10:21:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:36.641 10:21:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:36.641 10:21:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:36.641 10:21:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:36.641 
10:21:24 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:36.641 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:36.641 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:36.900 Using 'verbs' RDMA provider 00:02:47.462 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:57.454 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:57.454 Creating mk/config.mk...done. 00:02:57.454 Creating mk/cc.flags.mk...done. 00:02:57.454 Type 'make' to build. 00:02:57.454 10:21:45 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:02:57.454 10:21:45 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:57.454 10:21:45 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:57.454 10:21:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.454 ************************************ 00:02:57.454 START TEST make 00:02:57.454 ************************************ 00:02:57.454 10:21:45 make -- common/autotest_common.sh@1127 -- $ make -j48 00:02:57.711 make[1]: Nothing to be done for 'all'. 00:02:59.669 The Meson build system 00:02:59.669 Version: 1.5.0 00:02:59.669 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:59.669 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:59.669 Build type: native build 00:02:59.669 Project name: libvfio-user 00:02:59.669 Project version: 0.0.1 00:02:59.669 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:59.669 C linker for the host machine: cc ld.bfd 2.40-14 00:02:59.669 Host machine cpu family: x86_64 00:02:59.669 Host machine cpu: x86_64 00:02:59.669 Run-time dependency threads found: YES 00:02:59.669 Library dl found: YES 00:02:59.669 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:59.669 Run-time dependency json-c found: YES 0.17 00:02:59.669 Run-time dependency cmocka found: YES 1.1.7 00:02:59.669 Program pytest-3 found: NO 00:02:59.669 Program flake8 found: NO 00:02:59.669 Program misspell-fixer found: NO 00:02:59.669 Program restructuredtext-lint found: NO 00:02:59.669 Program valgrind found: YES (/usr/bin/valgrind) 00:02:59.669 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:59.669 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:59.669 Compiler for C supports arguments -Wwrite-strings: YES 00:02:59.669 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:59.669 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:59.669 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:59.669 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
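Editor's note: the configure step logged above can be reproduced by hand from an SPDK checkout with the same options the log prints. This is a sketch rather than the autobuild script itself; the fio path is the one present on this test node:

  # Same option set as the spdk/configure invocation shown in the log.
  cd spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  # The test job builds with 48 jobs ("make -j48" in the log).
  make -j48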
00:02:59.669 Build targets in project: 8 00:02:59.669 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:59.669 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:59.669 00:02:59.669 libvfio-user 0.0.1 00:02:59.669 00:02:59.669 User defined options 00:02:59.669 buildtype : debug 00:02:59.669 default_library: shared 00:02:59.669 libdir : /usr/local/lib 00:02:59.669 00:02:59.669 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:00.621 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:00.621 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:00.621 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:00.621 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:00.621 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:00.621 [5/37] Compiling C object samples/null.p/null.c.o 00:03:00.883 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:00.883 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:00.883 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:00.883 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:00.883 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:00.883 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:00.883 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:00.883 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:00.883 [14/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:00.883 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:00.883 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:00.883 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:00.883 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:00.883 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:00.883 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:00.883 [21/37] Compiling C object samples/server.p/server.c.o 00:03:00.883 [22/37] Compiling C object samples/client.p/client.c.o 00:03:00.883 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:00.883 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:00.883 [25/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:00.883 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:00.883 [27/37] Linking target samples/client 00:03:00.883 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:00.883 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:01.146 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:01.146 [31/37] Linking target test/unit_tests 00:03:01.146 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:01.146 [33/37] Linking target samples/server 00:03:01.146 [34/37] Linking target samples/null 00:03:01.146 [35/37] Linking target samples/gpio-pci-idio-16 00:03:01.146 [36/37] Linking target samples/lspci 00:03:01.146 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:01.405 INFO: autodetecting backend as ninja 00:03:01.405 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
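Editor's note: the libvfio-user submodule above is an out-of-tree Meson/Ninja build. A rough sketch of the equivalent manual steps follows; the exact meson setup invocation is not printed in the log, so its options are inferred from the "User defined options" summary (buildtype debug, shared default library, libdir /usr/local/lib), while the DESTDIR install line matches the command that follows in the log:

  # Assumed reconstruction of the libvfio-user build; paths are from this job's log.
  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  meson setup "$BUILD" "$SRC" --buildtype debug --default-library shared --libdir /usr/local/lib
  ninja -C "$BUILD"
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
      meson install --quiet -C "$BUILD"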
00:03:01.406 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:02.353 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:02.353 ninja: no work to do. 00:03:07.627 The Meson build system 00:03:07.627 Version: 1.5.0 00:03:07.627 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:07.627 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:07.627 Build type: native build 00:03:07.627 Program cat found: YES (/usr/bin/cat) 00:03:07.627 Project name: DPDK 00:03:07.627 Project version: 24.03.0 00:03:07.627 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:07.627 C linker for the host machine: cc ld.bfd 2.40-14 00:03:07.627 Host machine cpu family: x86_64 00:03:07.627 Host machine cpu: x86_64 00:03:07.627 Message: ## Building in Developer Mode ## 00:03:07.627 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:07.627 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:07.627 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:07.627 Program python3 found: YES (/usr/bin/python3) 00:03:07.627 Program cat found: YES (/usr/bin/cat) 00:03:07.627 Compiler for C supports arguments -march=native: YES 00:03:07.627 Checking for size of "void *" : 8 00:03:07.627 Checking for size of "void *" : 8 (cached) 00:03:07.627 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:07.627 Library m found: YES 00:03:07.627 Library numa found: YES 00:03:07.627 Has header "numaif.h" : YES 00:03:07.627 Library fdt found: NO 00:03:07.627 Library execinfo found: NO 00:03:07.627 Has header "execinfo.h" : YES 00:03:07.627 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:07.627 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:07.627 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:07.627 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:07.627 Run-time dependency openssl found: YES 3.1.1 00:03:07.627 Run-time dependency libpcap found: YES 1.10.4 00:03:07.627 Has header "pcap.h" with dependency libpcap: YES 00:03:07.627 Compiler for C supports arguments -Wcast-qual: YES 00:03:07.627 Compiler for C supports arguments -Wdeprecated: YES 00:03:07.627 Compiler for C supports arguments -Wformat: YES 00:03:07.627 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:07.627 Compiler for C supports arguments -Wformat-security: NO 00:03:07.627 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:07.627 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:07.627 Compiler for C supports arguments -Wnested-externs: YES 00:03:07.627 Compiler for C supports arguments -Wold-style-definition: YES 00:03:07.627 Compiler for C supports arguments -Wpointer-arith: YES 00:03:07.627 Compiler for C supports arguments -Wsign-compare: YES 00:03:07.627 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:07.627 Compiler for C supports arguments -Wundef: YES 00:03:07.627 Compiler for C supports arguments -Wwrite-strings: YES 00:03:07.627 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:07.627 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:03:07.627 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:07.627 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:07.627 Program objdump found: YES (/usr/bin/objdump) 00:03:07.627 Compiler for C supports arguments -mavx512f: YES 00:03:07.627 Checking if "AVX512 checking" compiles: YES 00:03:07.627 Fetching value of define "__SSE4_2__" : 1 00:03:07.627 Fetching value of define "__AES__" : 1 00:03:07.627 Fetching value of define "__AVX__" : 1 00:03:07.627 Fetching value of define "__AVX2__" : (undefined) 00:03:07.627 Fetching value of define "__AVX512BW__" : (undefined) 00:03:07.627 Fetching value of define "__AVX512CD__" : (undefined) 00:03:07.627 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:07.627 Fetching value of define "__AVX512F__" : (undefined) 00:03:07.627 Fetching value of define "__AVX512VL__" : (undefined) 00:03:07.627 Fetching value of define "__PCLMUL__" : 1 00:03:07.627 Fetching value of define "__RDRND__" : 1 00:03:07.627 Fetching value of define "__RDSEED__" : (undefined) 00:03:07.627 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:07.627 Fetching value of define "__znver1__" : (undefined) 00:03:07.627 Fetching value of define "__znver2__" : (undefined) 00:03:07.627 Fetching value of define "__znver3__" : (undefined) 00:03:07.627 Fetching value of define "__znver4__" : (undefined) 00:03:07.627 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:07.627 Message: lib/log: Defining dependency "log" 00:03:07.627 Message: lib/kvargs: Defining dependency "kvargs" 00:03:07.627 Message: lib/telemetry: Defining dependency "telemetry" 00:03:07.627 Checking for function "getentropy" : NO 00:03:07.627 Message: lib/eal: Defining dependency "eal" 00:03:07.627 Message: lib/ring: Defining dependency "ring" 00:03:07.627 Message: lib/rcu: Defining dependency "rcu" 00:03:07.627 Message: lib/mempool: Defining dependency "mempool" 00:03:07.627 Message: lib/mbuf: Defining dependency "mbuf" 00:03:07.627 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:07.627 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:07.627 Compiler for C supports arguments -mpclmul: YES 00:03:07.627 Compiler for C supports arguments -maes: YES 00:03:07.627 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:07.627 Compiler for C supports arguments -mavx512bw: YES 00:03:07.627 Compiler for C supports arguments -mavx512dq: YES 00:03:07.627 Compiler for C supports arguments -mavx512vl: YES 00:03:07.627 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:07.627 Compiler for C supports arguments -mavx2: YES 00:03:07.627 Compiler for C supports arguments -mavx: YES 00:03:07.627 Message: lib/net: Defining dependency "net" 00:03:07.627 Message: lib/meter: Defining dependency "meter" 00:03:07.627 Message: lib/ethdev: Defining dependency "ethdev" 00:03:07.627 Message: lib/pci: Defining dependency "pci" 00:03:07.627 Message: lib/cmdline: Defining dependency "cmdline" 00:03:07.627 Message: lib/hash: Defining dependency "hash" 00:03:07.627 Message: lib/timer: Defining dependency "timer" 00:03:07.627 Message: lib/compressdev: Defining dependency "compressdev" 00:03:07.627 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:07.627 Message: lib/dmadev: Defining dependency "dmadev" 00:03:07.627 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:07.627 Message: lib/power: Defining dependency "power" 00:03:07.627 Message: lib/reorder: Defining dependency 
"reorder" 00:03:07.627 Message: lib/security: Defining dependency "security" 00:03:07.627 Has header "linux/userfaultfd.h" : YES 00:03:07.627 Has header "linux/vduse.h" : YES 00:03:07.627 Message: lib/vhost: Defining dependency "vhost" 00:03:07.627 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:07.627 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:07.627 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:07.627 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:07.627 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:07.627 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:07.627 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:07.627 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:07.627 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:07.627 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:07.627 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:07.628 Configuring doxy-api-html.conf using configuration 00:03:07.628 Configuring doxy-api-man.conf using configuration 00:03:07.628 Program mandb found: YES (/usr/bin/mandb) 00:03:07.628 Program sphinx-build found: NO 00:03:07.628 Configuring rte_build_config.h using configuration 00:03:07.628 Message: 00:03:07.628 ================= 00:03:07.628 Applications Enabled 00:03:07.628 ================= 00:03:07.628 00:03:07.628 apps: 00:03:07.628 00:03:07.628 00:03:07.628 Message: 00:03:07.628 ================= 00:03:07.628 Libraries Enabled 00:03:07.628 ================= 00:03:07.628 00:03:07.628 libs: 00:03:07.628 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:07.628 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:07.628 cryptodev, dmadev, power, reorder, security, vhost, 00:03:07.628 00:03:07.628 Message: 00:03:07.628 =============== 00:03:07.628 Drivers Enabled 00:03:07.628 =============== 00:03:07.628 00:03:07.628 common: 00:03:07.628 00:03:07.628 bus: 00:03:07.628 pci, vdev, 00:03:07.628 mempool: 00:03:07.628 ring, 00:03:07.628 dma: 00:03:07.628 00:03:07.628 net: 00:03:07.628 00:03:07.628 crypto: 00:03:07.628 00:03:07.628 compress: 00:03:07.628 00:03:07.628 vdpa: 00:03:07.628 00:03:07.628 00:03:07.628 Message: 00:03:07.628 ================= 00:03:07.628 Content Skipped 00:03:07.628 ================= 00:03:07.628 00:03:07.628 apps: 00:03:07.628 dumpcap: explicitly disabled via build config 00:03:07.628 graph: explicitly disabled via build config 00:03:07.628 pdump: explicitly disabled via build config 00:03:07.628 proc-info: explicitly disabled via build config 00:03:07.628 test-acl: explicitly disabled via build config 00:03:07.628 test-bbdev: explicitly disabled via build config 00:03:07.628 test-cmdline: explicitly disabled via build config 00:03:07.628 test-compress-perf: explicitly disabled via build config 00:03:07.628 test-crypto-perf: explicitly disabled via build config 00:03:07.628 test-dma-perf: explicitly disabled via build config 00:03:07.628 test-eventdev: explicitly disabled via build config 00:03:07.628 test-fib: explicitly disabled via build config 00:03:07.628 test-flow-perf: explicitly disabled via build config 00:03:07.628 test-gpudev: explicitly disabled via build config 00:03:07.628 test-mldev: explicitly disabled via build config 00:03:07.628 test-pipeline: explicitly disabled via build config 00:03:07.628 test-pmd: explicitly 
disabled via build config 00:03:07.628 test-regex: explicitly disabled via build config 00:03:07.628 test-sad: explicitly disabled via build config 00:03:07.628 test-security-perf: explicitly disabled via build config 00:03:07.628 00:03:07.628 libs: 00:03:07.628 argparse: explicitly disabled via build config 00:03:07.628 metrics: explicitly disabled via build config 00:03:07.628 acl: explicitly disabled via build config 00:03:07.628 bbdev: explicitly disabled via build config 00:03:07.628 bitratestats: explicitly disabled via build config 00:03:07.628 bpf: explicitly disabled via build config 00:03:07.628 cfgfile: explicitly disabled via build config 00:03:07.628 distributor: explicitly disabled via build config 00:03:07.628 efd: explicitly disabled via build config 00:03:07.628 eventdev: explicitly disabled via build config 00:03:07.628 dispatcher: explicitly disabled via build config 00:03:07.628 gpudev: explicitly disabled via build config 00:03:07.628 gro: explicitly disabled via build config 00:03:07.628 gso: explicitly disabled via build config 00:03:07.628 ip_frag: explicitly disabled via build config 00:03:07.628 jobstats: explicitly disabled via build config 00:03:07.628 latencystats: explicitly disabled via build config 00:03:07.628 lpm: explicitly disabled via build config 00:03:07.628 member: explicitly disabled via build config 00:03:07.628 pcapng: explicitly disabled via build config 00:03:07.628 rawdev: explicitly disabled via build config 00:03:07.628 regexdev: explicitly disabled via build config 00:03:07.628 mldev: explicitly disabled via build config 00:03:07.628 rib: explicitly disabled via build config 00:03:07.628 sched: explicitly disabled via build config 00:03:07.628 stack: explicitly disabled via build config 00:03:07.628 ipsec: explicitly disabled via build config 00:03:07.628 pdcp: explicitly disabled via build config 00:03:07.628 fib: explicitly disabled via build config 00:03:07.628 port: explicitly disabled via build config 00:03:07.628 pdump: explicitly disabled via build config 00:03:07.628 table: explicitly disabled via build config 00:03:07.628 pipeline: explicitly disabled via build config 00:03:07.628 graph: explicitly disabled via build config 00:03:07.628 node: explicitly disabled via build config 00:03:07.628 00:03:07.628 drivers: 00:03:07.628 common/cpt: not in enabled drivers build config 00:03:07.628 common/dpaax: not in enabled drivers build config 00:03:07.628 common/iavf: not in enabled drivers build config 00:03:07.628 common/idpf: not in enabled drivers build config 00:03:07.628 common/ionic: not in enabled drivers build config 00:03:07.628 common/mvep: not in enabled drivers build config 00:03:07.628 common/octeontx: not in enabled drivers build config 00:03:07.628 bus/auxiliary: not in enabled drivers build config 00:03:07.628 bus/cdx: not in enabled drivers build config 00:03:07.628 bus/dpaa: not in enabled drivers build config 00:03:07.628 bus/fslmc: not in enabled drivers build config 00:03:07.628 bus/ifpga: not in enabled drivers build config 00:03:07.628 bus/platform: not in enabled drivers build config 00:03:07.628 bus/uacce: not in enabled drivers build config 00:03:07.628 bus/vmbus: not in enabled drivers build config 00:03:07.628 common/cnxk: not in enabled drivers build config 00:03:07.628 common/mlx5: not in enabled drivers build config 00:03:07.628 common/nfp: not in enabled drivers build config 00:03:07.628 common/nitrox: not in enabled drivers build config 00:03:07.628 common/qat: not in enabled drivers build config 
00:03:07.628 common/sfc_efx: not in enabled drivers build config 00:03:07.628 mempool/bucket: not in enabled drivers build config 00:03:07.628 mempool/cnxk: not in enabled drivers build config 00:03:07.628 mempool/dpaa: not in enabled drivers build config 00:03:07.628 mempool/dpaa2: not in enabled drivers build config 00:03:07.628 mempool/octeontx: not in enabled drivers build config 00:03:07.628 mempool/stack: not in enabled drivers build config 00:03:07.628 dma/cnxk: not in enabled drivers build config 00:03:07.628 dma/dpaa: not in enabled drivers build config 00:03:07.628 dma/dpaa2: not in enabled drivers build config 00:03:07.628 dma/hisilicon: not in enabled drivers build config 00:03:07.628 dma/idxd: not in enabled drivers build config 00:03:07.628 dma/ioat: not in enabled drivers build config 00:03:07.628 dma/skeleton: not in enabled drivers build config 00:03:07.628 net/af_packet: not in enabled drivers build config 00:03:07.628 net/af_xdp: not in enabled drivers build config 00:03:07.628 net/ark: not in enabled drivers build config 00:03:07.628 net/atlantic: not in enabled drivers build config 00:03:07.628 net/avp: not in enabled drivers build config 00:03:07.628 net/axgbe: not in enabled drivers build config 00:03:07.628 net/bnx2x: not in enabled drivers build config 00:03:07.628 net/bnxt: not in enabled drivers build config 00:03:07.628 net/bonding: not in enabled drivers build config 00:03:07.628 net/cnxk: not in enabled drivers build config 00:03:07.628 net/cpfl: not in enabled drivers build config 00:03:07.628 net/cxgbe: not in enabled drivers build config 00:03:07.628 net/dpaa: not in enabled drivers build config 00:03:07.628 net/dpaa2: not in enabled drivers build config 00:03:07.628 net/e1000: not in enabled drivers build config 00:03:07.628 net/ena: not in enabled drivers build config 00:03:07.628 net/enetc: not in enabled drivers build config 00:03:07.628 net/enetfec: not in enabled drivers build config 00:03:07.628 net/enic: not in enabled drivers build config 00:03:07.628 net/failsafe: not in enabled drivers build config 00:03:07.628 net/fm10k: not in enabled drivers build config 00:03:07.628 net/gve: not in enabled drivers build config 00:03:07.628 net/hinic: not in enabled drivers build config 00:03:07.628 net/hns3: not in enabled drivers build config 00:03:07.628 net/i40e: not in enabled drivers build config 00:03:07.628 net/iavf: not in enabled drivers build config 00:03:07.628 net/ice: not in enabled drivers build config 00:03:07.628 net/idpf: not in enabled drivers build config 00:03:07.628 net/igc: not in enabled drivers build config 00:03:07.628 net/ionic: not in enabled drivers build config 00:03:07.628 net/ipn3ke: not in enabled drivers build config 00:03:07.628 net/ixgbe: not in enabled drivers build config 00:03:07.628 net/mana: not in enabled drivers build config 00:03:07.628 net/memif: not in enabled drivers build config 00:03:07.628 net/mlx4: not in enabled drivers build config 00:03:07.628 net/mlx5: not in enabled drivers build config 00:03:07.628 net/mvneta: not in enabled drivers build config 00:03:07.628 net/mvpp2: not in enabled drivers build config 00:03:07.628 net/netvsc: not in enabled drivers build config 00:03:07.628 net/nfb: not in enabled drivers build config 00:03:07.628 net/nfp: not in enabled drivers build config 00:03:07.628 net/ngbe: not in enabled drivers build config 00:03:07.628 net/null: not in enabled drivers build config 00:03:07.628 net/octeontx: not in enabled drivers build config 00:03:07.628 net/octeon_ep: not in enabled 
drivers build config 00:03:07.628 net/pcap: not in enabled drivers build config 00:03:07.628 net/pfe: not in enabled drivers build config 00:03:07.628 net/qede: not in enabled drivers build config 00:03:07.628 net/ring: not in enabled drivers build config 00:03:07.628 net/sfc: not in enabled drivers build config 00:03:07.628 net/softnic: not in enabled drivers build config 00:03:07.628 net/tap: not in enabled drivers build config 00:03:07.628 net/thunderx: not in enabled drivers build config 00:03:07.628 net/txgbe: not in enabled drivers build config 00:03:07.628 net/vdev_netvsc: not in enabled drivers build config 00:03:07.628 net/vhost: not in enabled drivers build config 00:03:07.629 net/virtio: not in enabled drivers build config 00:03:07.629 net/vmxnet3: not in enabled drivers build config 00:03:07.629 raw/*: missing internal dependency, "rawdev" 00:03:07.629 crypto/armv8: not in enabled drivers build config 00:03:07.629 crypto/bcmfs: not in enabled drivers build config 00:03:07.629 crypto/caam_jr: not in enabled drivers build config 00:03:07.629 crypto/ccp: not in enabled drivers build config 00:03:07.629 crypto/cnxk: not in enabled drivers build config 00:03:07.629 crypto/dpaa_sec: not in enabled drivers build config 00:03:07.629 crypto/dpaa2_sec: not in enabled drivers build config 00:03:07.629 crypto/ipsec_mb: not in enabled drivers build config 00:03:07.629 crypto/mlx5: not in enabled drivers build config 00:03:07.629 crypto/mvsam: not in enabled drivers build config 00:03:07.629 crypto/nitrox: not in enabled drivers build config 00:03:07.629 crypto/null: not in enabled drivers build config 00:03:07.629 crypto/octeontx: not in enabled drivers build config 00:03:07.629 crypto/openssl: not in enabled drivers build config 00:03:07.629 crypto/scheduler: not in enabled drivers build config 00:03:07.629 crypto/uadk: not in enabled drivers build config 00:03:07.629 crypto/virtio: not in enabled drivers build config 00:03:07.629 compress/isal: not in enabled drivers build config 00:03:07.629 compress/mlx5: not in enabled drivers build config 00:03:07.629 compress/nitrox: not in enabled drivers build config 00:03:07.629 compress/octeontx: not in enabled drivers build config 00:03:07.629 compress/zlib: not in enabled drivers build config 00:03:07.629 regex/*: missing internal dependency, "regexdev" 00:03:07.629 ml/*: missing internal dependency, "mldev" 00:03:07.629 vdpa/ifc: not in enabled drivers build config 00:03:07.629 vdpa/mlx5: not in enabled drivers build config 00:03:07.629 vdpa/nfp: not in enabled drivers build config 00:03:07.629 vdpa/sfc: not in enabled drivers build config 00:03:07.629 event/*: missing internal dependency, "eventdev" 00:03:07.629 baseband/*: missing internal dependency, "bbdev" 00:03:07.629 gpu/*: missing internal dependency, "gpudev" 00:03:07.629 00:03:07.629 00:03:07.629 Build targets in project: 85 00:03:07.629 00:03:07.629 DPDK 24.03.0 00:03:07.629 00:03:07.629 User defined options 00:03:07.629 buildtype : debug 00:03:07.629 default_library : shared 00:03:07.629 libdir : lib 00:03:07.629 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:07.629 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:07.629 c_link_args : 00:03:07.629 cpu_instruction_set: native 00:03:07.629 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:03:07.629 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:03:07.629 enable_docs : false 00:03:07.629 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:07.629 enable_kmods : false 00:03:07.629 max_lcores : 128 00:03:07.629 tests : false 00:03:07.629 00:03:07.629 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:07.629 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:07.629 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:07.629 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:07.629 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:07.629 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:07.629 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:07.629 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:07.629 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:07.629 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:07.629 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:07.629 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:07.629 [11/268] Linking static target lib/librte_kvargs.a 00:03:07.629 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:07.888 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:07.888 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:07.888 [15/268] Linking static target lib/librte_log.a 00:03:07.888 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:08.462 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.462 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:08.462 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:08.462 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:08.462 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:08.462 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:08.462 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:08.462 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:08.462 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:08.462 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:08.462 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:08.462 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:08.728 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:08.728 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:08.728 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:08.728 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:08.728 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:08.728 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:08.728 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:08.728 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:08.728 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:08.728 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:08.728 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:08.728 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:08.728 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:08.728 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:08.728 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:08.728 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:08.728 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:08.728 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:08.728 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:08.728 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:08.728 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:08.728 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:08.728 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:08.728 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:08.728 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:08.728 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:08.728 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:08.728 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:08.728 [57/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:08.728 [58/268] Linking static target lib/librte_telemetry.a 00:03:08.728 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:08.992 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:08.992 [61/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.992 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:08.992 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:08.992 [64/268] Linking target lib/librte_log.so.24.1 00:03:08.992 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:08.992 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:08.992 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:08.992 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:09.255 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:09.255 [70/268] Linking static target lib/librte_pci.a 00:03:09.255 
[71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:09.255 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:09.255 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:09.255 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:09.516 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:09.516 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:09.516 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:09.516 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:09.516 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:09.516 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:09.516 [81/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:09.516 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:09.516 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:09.516 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:09.516 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:09.516 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:09.516 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:09.516 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:09.516 [89/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:09.516 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:09.516 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:09.516 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:09.516 [93/268] Linking target lib/librte_kvargs.so.24.1 00:03:09.516 [94/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:09.778 [95/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:09.778 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:09.778 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:09.778 [98/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:09.778 [99/268] Linking static target lib/librte_ring.a 00:03:09.778 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:09.778 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:09.778 [102/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:09.778 [103/268] Linking static target lib/librte_meter.a 00:03:09.778 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:09.778 [105/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.778 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:09.778 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:09.778 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:09.778 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:09.778 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:09.778 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 
00:03:09.778 [112/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:09.778 [113/268] Linking static target lib/librte_eal.a 00:03:09.778 [114/268] Linking static target lib/librte_rcu.a 00:03:09.778 [115/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:09.778 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:09.778 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:09.778 [118/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:09.778 [119/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:09.778 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:09.778 [121/268] Linking static target lib/librte_mempool.a 00:03:10.041 [122/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:10.041 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:10.041 [124/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.041 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:10.041 [126/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:10.041 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:10.041 [128/268] Linking target lib/librte_telemetry.so.24.1 00:03:10.041 [129/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:10.302 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:10.302 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:10.302 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.302 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:10.302 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:10.302 [135/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:10.302 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:10.302 [137/268] Linking static target lib/librte_net.a 00:03:10.302 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.302 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:10.302 [140/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:10.302 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:10.567 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:10.567 [143/268] Linking static target lib/librte_cmdline.a 00:03:10.567 [144/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:10.567 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:10.567 [146/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.567 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:10.567 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:10.567 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:10.567 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:10.567 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:10.567 [152/268] Linking static target 
lib/librte_timer.a 00:03:10.567 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:10.567 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:10.827 [155/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:10.827 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:10.827 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.827 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:10.827 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:10.827 [160/268] Linking static target lib/librte_dmadev.a 00:03:10.827 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:10.827 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:10.827 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:11.086 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:11.086 [165/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.086 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:11.086 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:11.086 [168/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:11.086 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:11.086 [170/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.086 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:11.086 [172/268] Linking static target lib/librte_power.a 00:03:11.086 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:11.086 [174/268] Linking static target lib/librte_compressdev.a 00:03:11.086 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:11.086 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:11.086 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:11.086 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:11.086 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:11.345 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:11.345 [181/268] Linking static target lib/librte_hash.a 00:03:11.345 [182/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:11.345 [183/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:11.345 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:11.345 [185/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.345 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:11.345 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:11.345 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:11.345 [189/268] Linking static target lib/librte_reorder.a 00:03:11.345 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:11.345 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:11.345 [192/268] 
Linking static target drivers/libtmp_rte_bus_pci.a 00:03:11.345 [193/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.605 [194/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:11.605 [195/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:11.605 [196/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:11.605 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:11.605 [198/268] Linking static target lib/librte_mbuf.a 00:03:11.605 [199/268] Linking static target drivers/librte_bus_vdev.a 00:03:11.605 [200/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.605 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:11.605 [202/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:11.605 [203/268] Linking static target lib/librte_security.a 00:03:11.605 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:11.605 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:11.605 [206/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:11.605 [207/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.605 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:11.605 [209/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:11.605 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:11.605 [211/268] Linking static target drivers/librte_bus_pci.a 00:03:11.605 [212/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.863 [213/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.863 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:11.863 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.863 [216/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:11.863 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:11.863 [218/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:11.863 [219/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:11.863 [220/268] Linking static target drivers/librte_mempool_ring.a 00:03:11.863 [221/268] Linking static target lib/librte_ethdev.a 00:03:12.122 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.122 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.122 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:12.122 [225/268] Linking static target lib/librte_cryptodev.a 00:03:12.122 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.057 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.434 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:16.334 [229/268] Generating lib/eal.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:16.335 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.335 [231/268] Linking target lib/librte_eal.so.24.1 00:03:16.335 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:16.335 [233/268] Linking target lib/librte_ring.so.24.1 00:03:16.335 [234/268] Linking target lib/librte_timer.so.24.1 00:03:16.335 [235/268] Linking target lib/librte_pci.so.24.1 00:03:16.335 [236/268] Linking target lib/librte_meter.so.24.1 00:03:16.335 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:16.335 [238/268] Linking target lib/librte_dmadev.so.24.1 00:03:16.335 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:16.335 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:16.335 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:16.335 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:16.335 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:16.335 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:16.335 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:16.335 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:16.593 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:16.593 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:16.593 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:16.593 [250/268] Linking target lib/librte_mbuf.so.24.1 00:03:16.593 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:16.852 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:16.852 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:16.852 [254/268] Linking target lib/librte_net.so.24.1 00:03:16.852 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:16.852 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:16.852 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:16.852 [258/268] Linking target lib/librte_hash.so.24.1 00:03:16.852 [259/268] Linking target lib/librte_security.so.24.1 00:03:16.852 [260/268] Linking target lib/librte_cmdline.so.24.1 00:03:16.852 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:17.111 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:17.111 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:17.111 [264/268] Linking target lib/librte_power.so.24.1 00:03:20.394 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:20.394 [266/268] Linking static target lib/librte_vhost.a 00:03:20.959 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.217 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:21.217 INFO: autodetecting backend as ninja 00:03:21.217 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:43.144 CC lib/ut_mock/mock.o 00:03:43.144 CC lib/log/log.o 00:03:43.144 CC lib/log/log_flags.o 00:03:43.144 CC lib/ut/ut.o 00:03:43.144 CC lib/log/log_deprecated.o 00:03:43.144 LIB 
libspdk_ut.a 00:03:43.144 LIB libspdk_ut_mock.a 00:03:43.144 LIB libspdk_log.a 00:03:43.144 SO libspdk_ut.so.2.0 00:03:43.144 SO libspdk_ut_mock.so.6.0 00:03:43.144 SO libspdk_log.so.7.1 00:03:43.144 SYMLINK libspdk_ut_mock.so 00:03:43.144 SYMLINK libspdk_ut.so 00:03:43.144 SYMLINK libspdk_log.so 00:03:43.144 CC lib/ioat/ioat.o 00:03:43.144 CXX lib/trace_parser/trace.o 00:03:43.144 CC lib/util/base64.o 00:03:43.144 CC lib/dma/dma.o 00:03:43.144 CC lib/util/bit_array.o 00:03:43.144 CC lib/util/cpuset.o 00:03:43.144 CC lib/util/crc16.o 00:03:43.144 CC lib/util/crc32.o 00:03:43.144 CC lib/util/crc32c.o 00:03:43.144 CC lib/util/crc32_ieee.o 00:03:43.144 CC lib/util/crc64.o 00:03:43.144 CC lib/util/dif.o 00:03:43.144 CC lib/util/fd.o 00:03:43.144 CC lib/util/fd_group.o 00:03:43.144 CC lib/util/file.o 00:03:43.144 CC lib/util/hexlify.o 00:03:43.144 CC lib/util/iov.o 00:03:43.144 CC lib/util/math.o 00:03:43.144 CC lib/util/net.o 00:03:43.144 CC lib/util/pipe.o 00:03:43.144 CC lib/util/string.o 00:03:43.144 CC lib/util/strerror_tls.o 00:03:43.144 CC lib/util/uuid.o 00:03:43.144 CC lib/util/xor.o 00:03:43.144 CC lib/util/md5.o 00:03:43.144 CC lib/util/zipf.o 00:03:43.144 CC lib/vfio_user/host/vfio_user_pci.o 00:03:43.144 CC lib/vfio_user/host/vfio_user.o 00:03:43.144 LIB libspdk_dma.a 00:03:43.144 SO libspdk_dma.so.5.0 00:03:43.144 SYMLINK libspdk_dma.so 00:03:43.144 LIB libspdk_ioat.a 00:03:43.144 SO libspdk_ioat.so.7.0 00:03:43.144 LIB libspdk_vfio_user.a 00:03:43.144 SYMLINK libspdk_ioat.so 00:03:43.144 SO libspdk_vfio_user.so.5.0 00:03:43.144 SYMLINK libspdk_vfio_user.so 00:03:43.144 LIB libspdk_util.a 00:03:43.144 SO libspdk_util.so.10.1 00:03:43.144 SYMLINK libspdk_util.so 00:03:43.144 CC lib/json/json_parse.o 00:03:43.144 CC lib/json/json_util.o 00:03:43.144 CC lib/conf/conf.o 00:03:43.144 CC lib/idxd/idxd.o 00:03:43.144 CC lib/json/json_write.o 00:03:43.144 CC lib/env_dpdk/env.o 00:03:43.144 CC lib/rdma_utils/rdma_utils.o 00:03:43.144 CC lib/idxd/idxd_user.o 00:03:43.144 CC lib/vmd/vmd.o 00:03:43.144 CC lib/env_dpdk/memory.o 00:03:43.144 CC lib/idxd/idxd_kernel.o 00:03:43.144 CC lib/env_dpdk/pci.o 00:03:43.144 CC lib/vmd/led.o 00:03:43.144 CC lib/env_dpdk/init.o 00:03:43.144 CC lib/env_dpdk/threads.o 00:03:43.144 CC lib/env_dpdk/pci_ioat.o 00:03:43.144 CC lib/env_dpdk/pci_virtio.o 00:03:43.144 CC lib/env_dpdk/pci_vmd.o 00:03:43.144 CC lib/env_dpdk/pci_idxd.o 00:03:43.144 CC lib/env_dpdk/pci_event.o 00:03:43.144 CC lib/env_dpdk/sigbus_handler.o 00:03:43.144 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:43.144 CC lib/env_dpdk/pci_dpdk.o 00:03:43.144 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:43.144 LIB libspdk_trace_parser.a 00:03:43.144 SO libspdk_trace_parser.so.6.0 00:03:43.144 SYMLINK libspdk_trace_parser.so 00:03:43.144 LIB libspdk_conf.a 00:03:43.144 SO libspdk_conf.so.6.0 00:03:43.144 LIB libspdk_rdma_utils.a 00:03:43.144 LIB libspdk_json.a 00:03:43.144 SO libspdk_rdma_utils.so.1.0 00:03:43.144 SYMLINK libspdk_conf.so 00:03:43.144 SO libspdk_json.so.6.0 00:03:43.144 SYMLINK libspdk_rdma_utils.so 00:03:43.144 SYMLINK libspdk_json.so 00:03:43.144 CC lib/rdma_provider/common.o 00:03:43.144 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:43.144 CC lib/jsonrpc/jsonrpc_server.o 00:03:43.144 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:43.144 CC lib/jsonrpc/jsonrpc_client.o 00:03:43.144 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:43.144 LIB libspdk_idxd.a 00:03:43.144 SO libspdk_idxd.so.12.1 00:03:43.144 LIB libspdk_vmd.a 00:03:43.144 SO libspdk_vmd.so.6.0 00:03:43.144 SYMLINK libspdk_idxd.so 00:03:43.144 LIB 
libspdk_rdma_provider.a 00:03:43.144 SYMLINK libspdk_vmd.so 00:03:43.144 SO libspdk_rdma_provider.so.7.0 00:03:43.144 LIB libspdk_jsonrpc.a 00:03:43.144 SYMLINK libspdk_rdma_provider.so 00:03:43.144 SO libspdk_jsonrpc.so.6.0 00:03:43.144 SYMLINK libspdk_jsonrpc.so 00:03:43.145 CC lib/rpc/rpc.o 00:03:43.145 LIB libspdk_rpc.a 00:03:43.145 SO libspdk_rpc.so.6.0 00:03:43.403 SYMLINK libspdk_rpc.so 00:03:43.403 CC lib/keyring/keyring.o 00:03:43.403 CC lib/trace/trace.o 00:03:43.403 CC lib/keyring/keyring_rpc.o 00:03:43.403 CC lib/trace/trace_flags.o 00:03:43.403 CC lib/trace/trace_rpc.o 00:03:43.403 CC lib/notify/notify.o 00:03:43.403 CC lib/notify/notify_rpc.o 00:03:43.662 LIB libspdk_notify.a 00:03:43.662 SO libspdk_notify.so.6.0 00:03:43.662 SYMLINK libspdk_notify.so 00:03:43.662 LIB libspdk_keyring.a 00:03:43.662 SO libspdk_keyring.so.2.0 00:03:43.662 LIB libspdk_trace.a 00:03:43.662 SO libspdk_trace.so.11.0 00:03:43.662 SYMLINK libspdk_keyring.so 00:03:43.920 SYMLINK libspdk_trace.so 00:03:43.920 CC lib/thread/thread.o 00:03:43.920 CC lib/thread/iobuf.o 00:03:43.920 CC lib/sock/sock.o 00:03:43.920 CC lib/sock/sock_rpc.o 00:03:43.920 LIB libspdk_env_dpdk.a 00:03:43.920 SO libspdk_env_dpdk.so.15.1 00:03:44.178 SYMLINK libspdk_env_dpdk.so 00:03:44.437 LIB libspdk_sock.a 00:03:44.437 SO libspdk_sock.so.10.0 00:03:44.437 SYMLINK libspdk_sock.so 00:03:44.695 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:44.695 CC lib/nvme/nvme_ctrlr.o 00:03:44.695 CC lib/nvme/nvme_fabric.o 00:03:44.695 CC lib/nvme/nvme_ns_cmd.o 00:03:44.695 CC lib/nvme/nvme_ns.o 00:03:44.695 CC lib/nvme/nvme_pcie_common.o 00:03:44.695 CC lib/nvme/nvme_pcie.o 00:03:44.695 CC lib/nvme/nvme_qpair.o 00:03:44.695 CC lib/nvme/nvme.o 00:03:44.695 CC lib/nvme/nvme_quirks.o 00:03:44.695 CC lib/nvme/nvme_transport.o 00:03:44.695 CC lib/nvme/nvme_discovery.o 00:03:44.695 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:44.695 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:44.695 CC lib/nvme/nvme_tcp.o 00:03:44.695 CC lib/nvme/nvme_opal.o 00:03:44.695 CC lib/nvme/nvme_io_msg.o 00:03:44.695 CC lib/nvme/nvme_poll_group.o 00:03:44.695 CC lib/nvme/nvme_zns.o 00:03:44.695 CC lib/nvme/nvme_stubs.o 00:03:44.695 CC lib/nvme/nvme_auth.o 00:03:44.695 CC lib/nvme/nvme_cuse.o 00:03:44.695 CC lib/nvme/nvme_vfio_user.o 00:03:44.695 CC lib/nvme/nvme_rdma.o 00:03:45.630 LIB libspdk_thread.a 00:03:45.630 SO libspdk_thread.so.11.0 00:03:45.630 SYMLINK libspdk_thread.so 00:03:45.888 CC lib/accel/accel.o 00:03:45.888 CC lib/vfu_tgt/tgt_endpoint.o 00:03:45.888 CC lib/fsdev/fsdev.o 00:03:45.888 CC lib/virtio/virtio.o 00:03:45.888 CC lib/vfu_tgt/tgt_rpc.o 00:03:45.888 CC lib/init/json_config.o 00:03:45.888 CC lib/blob/blobstore.o 00:03:45.888 CC lib/virtio/virtio_vhost_user.o 00:03:45.888 CC lib/accel/accel_rpc.o 00:03:45.888 CC lib/init/subsystem.o 00:03:45.888 CC lib/blob/request.o 00:03:45.888 CC lib/fsdev/fsdev_io.o 00:03:45.888 CC lib/virtio/virtio_vfio_user.o 00:03:45.888 CC lib/accel/accel_sw.o 00:03:45.888 CC lib/init/subsystem_rpc.o 00:03:45.888 CC lib/blob/zeroes.o 00:03:45.888 CC lib/virtio/virtio_pci.o 00:03:45.888 CC lib/blob/blob_bs_dev.o 00:03:45.888 CC lib/fsdev/fsdev_rpc.o 00:03:45.888 CC lib/init/rpc.o 00:03:46.147 LIB libspdk_init.a 00:03:46.147 SO libspdk_init.so.6.0 00:03:46.147 LIB libspdk_vfu_tgt.a 00:03:46.147 SYMLINK libspdk_init.so 00:03:46.147 SO libspdk_vfu_tgt.so.3.0 00:03:46.147 LIB libspdk_virtio.a 00:03:46.147 SO libspdk_virtio.so.7.0 00:03:46.405 SYMLINK libspdk_vfu_tgt.so 00:03:46.405 SYMLINK libspdk_virtio.so 00:03:46.405 CC lib/event/app.o 00:03:46.405 CC 
lib/event/reactor.o 00:03:46.405 CC lib/event/log_rpc.o 00:03:46.405 CC lib/event/app_rpc.o 00:03:46.405 CC lib/event/scheduler_static.o 00:03:46.405 LIB libspdk_fsdev.a 00:03:46.663 SO libspdk_fsdev.so.2.0 00:03:46.663 SYMLINK libspdk_fsdev.so 00:03:46.663 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:46.922 LIB libspdk_event.a 00:03:46.922 SO libspdk_event.so.14.0 00:03:46.922 SYMLINK libspdk_event.so 00:03:46.922 LIB libspdk_accel.a 00:03:46.922 SO libspdk_accel.so.16.0 00:03:47.181 SYMLINK libspdk_accel.so 00:03:47.181 LIB libspdk_nvme.a 00:03:47.181 CC lib/bdev/bdev.o 00:03:47.181 CC lib/bdev/bdev_rpc.o 00:03:47.181 CC lib/bdev/bdev_zone.o 00:03:47.181 CC lib/bdev/part.o 00:03:47.181 CC lib/bdev/scsi_nvme.o 00:03:47.181 SO libspdk_nvme.so.15.0 00:03:47.440 LIB libspdk_fuse_dispatcher.a 00:03:47.440 SO libspdk_fuse_dispatcher.so.1.0 00:03:47.440 SYMLINK libspdk_nvme.so 00:03:47.440 SYMLINK libspdk_fuse_dispatcher.so 00:03:49.345 LIB libspdk_blob.a 00:03:49.345 SO libspdk_blob.so.11.0 00:03:49.345 SYMLINK libspdk_blob.so 00:03:49.346 CC lib/blobfs/blobfs.o 00:03:49.346 CC lib/blobfs/tree.o 00:03:49.346 CC lib/lvol/lvol.o 00:03:49.912 LIB libspdk_bdev.a 00:03:49.912 SO libspdk_bdev.so.17.0 00:03:49.912 SYMLINK libspdk_bdev.so 00:03:50.177 LIB libspdk_blobfs.a 00:03:50.177 SO libspdk_blobfs.so.10.0 00:03:50.177 SYMLINK libspdk_blobfs.so 00:03:50.177 LIB libspdk_lvol.a 00:03:50.177 SO libspdk_lvol.so.10.0 00:03:50.177 CC lib/ublk/ublk.o 00:03:50.177 CC lib/nbd/nbd.o 00:03:50.177 CC lib/nvmf/ctrlr.o 00:03:50.177 CC lib/scsi/dev.o 00:03:50.177 CC lib/nbd/nbd_rpc.o 00:03:50.177 CC lib/nvmf/ctrlr_discovery.o 00:03:50.177 CC lib/scsi/lun.o 00:03:50.177 CC lib/ublk/ublk_rpc.o 00:03:50.177 CC lib/ftl/ftl_core.o 00:03:50.177 CC lib/nvmf/ctrlr_bdev.o 00:03:50.177 CC lib/scsi/port.o 00:03:50.177 CC lib/ftl/ftl_init.o 00:03:50.177 CC lib/nvmf/subsystem.o 00:03:50.177 CC lib/ftl/ftl_layout.o 00:03:50.177 CC lib/scsi/scsi.o 00:03:50.177 CC lib/nvmf/nvmf.o 00:03:50.177 CC lib/scsi/scsi_bdev.o 00:03:50.177 CC lib/ftl/ftl_debug.o 00:03:50.177 CC lib/nvmf/nvmf_rpc.o 00:03:50.177 CC lib/ftl/ftl_io.o 00:03:50.177 CC lib/scsi/scsi_pr.o 00:03:50.177 CC lib/scsi/scsi_rpc.o 00:03:50.177 CC lib/nvmf/tcp.o 00:03:50.177 CC lib/nvmf/transport.o 00:03:50.177 CC lib/ftl/ftl_sb.o 00:03:50.177 CC lib/nvmf/stubs.o 00:03:50.177 CC lib/scsi/task.o 00:03:50.177 CC lib/ftl/ftl_l2p.o 00:03:50.177 CC lib/ftl/ftl_l2p_flat.o 00:03:50.177 CC lib/nvmf/mdns_server.o 00:03:50.177 CC lib/ftl/ftl_nv_cache.o 00:03:50.177 CC lib/nvmf/vfio_user.o 00:03:50.177 CC lib/ftl/ftl_band.o 00:03:50.177 CC lib/nvmf/rdma.o 00:03:50.177 CC lib/ftl/ftl_band_ops.o 00:03:50.177 CC lib/nvmf/auth.o 00:03:50.177 CC lib/ftl/ftl_writer.o 00:03:50.177 CC lib/ftl/ftl_rq.o 00:03:50.177 CC lib/ftl/ftl_reloc.o 00:03:50.177 CC lib/ftl/ftl_l2p_cache.o 00:03:50.177 CC lib/ftl/ftl_p2l.o 00:03:50.177 CC lib/ftl/ftl_p2l_log.o 00:03:50.177 CC lib/ftl/mngt/ftl_mngt.o 00:03:50.177 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:50.177 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:50.177 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:50.177 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:50.177 SYMLINK libspdk_lvol.so 00:03:50.177 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:50.437 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:50.437 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:50.701 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:50.701 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:50.701 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:50.701 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:50.701 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:50.701 CC 
lib/ftl/utils/ftl_conf.o 00:03:50.701 CC lib/ftl/utils/ftl_md.o 00:03:50.701 CC lib/ftl/utils/ftl_mempool.o 00:03:50.701 CC lib/ftl/utils/ftl_bitmap.o 00:03:50.701 CC lib/ftl/utils/ftl_property.o 00:03:50.701 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:50.701 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:50.701 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:50.701 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:50.701 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:50.701 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:50.960 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:50.960 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:50.960 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:50.960 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:50.960 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:50.960 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:50.960 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:50.960 CC lib/ftl/base/ftl_base_dev.o 00:03:50.960 CC lib/ftl/base/ftl_base_bdev.o 00:03:50.960 CC lib/ftl/ftl_trace.o 00:03:50.960 LIB libspdk_nbd.a 00:03:50.960 SO libspdk_nbd.so.7.0 00:03:51.220 SYMLINK libspdk_nbd.so 00:03:51.220 LIB libspdk_scsi.a 00:03:51.220 SO libspdk_scsi.so.9.0 00:03:51.220 SYMLINK libspdk_scsi.so 00:03:51.478 LIB libspdk_ublk.a 00:03:51.478 SO libspdk_ublk.so.3.0 00:03:51.478 SYMLINK libspdk_ublk.so 00:03:51.478 CC lib/vhost/vhost.o 00:03:51.478 CC lib/vhost/vhost_rpc.o 00:03:51.478 CC lib/iscsi/conn.o 00:03:51.478 CC lib/iscsi/init_grp.o 00:03:51.478 CC lib/vhost/vhost_scsi.o 00:03:51.478 CC lib/iscsi/iscsi.o 00:03:51.478 CC lib/vhost/vhost_blk.o 00:03:51.479 CC lib/vhost/rte_vhost_user.o 00:03:51.479 CC lib/iscsi/param.o 00:03:51.479 CC lib/iscsi/portal_grp.o 00:03:51.479 CC lib/iscsi/tgt_node.o 00:03:51.479 CC lib/iscsi/iscsi_subsystem.o 00:03:51.479 CC lib/iscsi/iscsi_rpc.o 00:03:51.479 CC lib/iscsi/task.o 00:03:51.736 LIB libspdk_ftl.a 00:03:51.993 SO libspdk_ftl.so.9.0 00:03:52.252 SYMLINK libspdk_ftl.so 00:03:52.819 LIB libspdk_vhost.a 00:03:52.819 SO libspdk_vhost.so.8.0 00:03:52.819 SYMLINK libspdk_vhost.so 00:03:52.819 LIB libspdk_nvmf.a 00:03:52.819 SO libspdk_nvmf.so.20.0 00:03:52.819 LIB libspdk_iscsi.a 00:03:53.077 SO libspdk_iscsi.so.8.0 00:03:53.077 SYMLINK libspdk_nvmf.so 00:03:53.077 SYMLINK libspdk_iscsi.so 00:03:53.335 CC module/env_dpdk/env_dpdk_rpc.o 00:03:53.335 CC module/vfu_device/vfu_virtio.o 00:03:53.335 CC module/vfu_device/vfu_virtio_blk.o 00:03:53.335 CC module/vfu_device/vfu_virtio_scsi.o 00:03:53.335 CC module/vfu_device/vfu_virtio_rpc.o 00:03:53.335 CC module/vfu_device/vfu_virtio_fs.o 00:03:53.335 CC module/fsdev/aio/fsdev_aio.o 00:03:53.335 CC module/scheduler/gscheduler/gscheduler.o 00:03:53.335 CC module/keyring/file/keyring.o 00:03:53.335 CC module/sock/posix/posix.o 00:03:53.335 CC module/accel/iaa/accel_iaa.o 00:03:53.335 CC module/accel/ioat/accel_ioat.o 00:03:53.335 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:53.335 CC module/keyring/file/keyring_rpc.o 00:03:53.335 CC module/accel/dsa/accel_dsa_rpc.o 00:03:53.335 CC module/accel/dsa/accel_dsa.o 00:03:53.335 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:53.335 CC module/keyring/linux/keyring.o 00:03:53.335 CC module/accel/ioat/accel_ioat_rpc.o 00:03:53.335 CC module/accel/error/accel_error.o 00:03:53.335 CC module/blob/bdev/blob_bdev.o 00:03:53.335 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:53.335 CC module/accel/iaa/accel_iaa_rpc.o 00:03:53.335 CC module/keyring/linux/keyring_rpc.o 00:03:53.335 CC module/accel/error/accel_error_rpc.o 00:03:53.335 CC module/fsdev/aio/linux_aio_mgr.o 00:03:53.594 LIB libspdk_env_dpdk_rpc.a 
00:03:53.594 SO libspdk_env_dpdk_rpc.so.6.0 00:03:53.594 SYMLINK libspdk_env_dpdk_rpc.so 00:03:53.594 LIB libspdk_keyring_linux.a 00:03:53.594 LIB libspdk_scheduler_dpdk_governor.a 00:03:53.594 LIB libspdk_scheduler_gscheduler.a 00:03:53.594 SO libspdk_keyring_linux.so.1.0 00:03:53.594 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:53.594 SO libspdk_scheduler_gscheduler.so.4.0 00:03:53.594 LIB libspdk_scheduler_dynamic.a 00:03:53.594 LIB libspdk_accel_error.a 00:03:53.594 LIB libspdk_accel_iaa.a 00:03:53.594 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:53.594 SYMLINK libspdk_keyring_linux.so 00:03:53.594 SYMLINK libspdk_scheduler_gscheduler.so 00:03:53.853 SO libspdk_scheduler_dynamic.so.4.0 00:03:53.853 LIB libspdk_keyring_file.a 00:03:53.853 SO libspdk_accel_error.so.2.0 00:03:53.853 SO libspdk_accel_iaa.so.3.0 00:03:53.853 SO libspdk_keyring_file.so.2.0 00:03:53.853 LIB libspdk_accel_ioat.a 00:03:53.853 SYMLINK libspdk_scheduler_dynamic.so 00:03:53.853 SO libspdk_accel_ioat.so.6.0 00:03:53.853 LIB libspdk_blob_bdev.a 00:03:53.853 SYMLINK libspdk_accel_error.so 00:03:53.853 LIB libspdk_accel_dsa.a 00:03:53.853 SYMLINK libspdk_accel_iaa.so 00:03:53.853 SO libspdk_blob_bdev.so.11.0 00:03:53.853 SYMLINK libspdk_keyring_file.so 00:03:53.853 SO libspdk_accel_dsa.so.5.0 00:03:53.853 SYMLINK libspdk_accel_ioat.so 00:03:53.853 SYMLINK libspdk_blob_bdev.so 00:03:53.853 SYMLINK libspdk_accel_dsa.so 00:03:54.115 LIB libspdk_vfu_device.a 00:03:54.115 SO libspdk_vfu_device.so.3.0 00:03:54.115 CC module/blobfs/bdev/blobfs_bdev.o 00:03:54.115 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:54.115 CC module/bdev/malloc/bdev_malloc.o 00:03:54.115 CC module/bdev/lvol/vbdev_lvol.o 00:03:54.115 CC module/bdev/passthru/vbdev_passthru.o 00:03:54.115 CC module/bdev/gpt/gpt.o 00:03:54.115 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:54.115 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:54.115 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:54.115 CC module/bdev/gpt/vbdev_gpt.o 00:03:54.115 CC module/bdev/split/vbdev_split.o 00:03:54.115 CC module/bdev/aio/bdev_aio.o 00:03:54.115 CC module/bdev/delay/vbdev_delay.o 00:03:54.115 CC module/bdev/split/vbdev_split_rpc.o 00:03:54.115 CC module/bdev/null/bdev_null.o 00:03:54.115 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:54.115 CC module/bdev/aio/bdev_aio_rpc.o 00:03:54.115 CC module/bdev/error/vbdev_error.o 00:03:54.115 CC module/bdev/null/bdev_null_rpc.o 00:03:54.115 CC module/bdev/nvme/bdev_nvme.o 00:03:54.115 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:54.115 CC module/bdev/error/vbdev_error_rpc.o 00:03:54.115 CC module/bdev/raid/bdev_raid.o 00:03:54.115 CC module/bdev/ftl/bdev_ftl.o 00:03:54.115 CC module/bdev/raid/bdev_raid_rpc.o 00:03:54.115 CC module/bdev/nvme/nvme_rpc.o 00:03:54.115 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:54.115 CC module/bdev/raid/bdev_raid_sb.o 00:03:54.115 CC module/bdev/nvme/bdev_mdns_client.o 00:03:54.115 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:54.115 CC module/bdev/nvme/vbdev_opal.o 00:03:54.115 CC module/bdev/raid/raid0.o 00:03:54.115 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:54.115 CC module/bdev/iscsi/bdev_iscsi.o 00:03:54.115 CC module/bdev/raid/raid1.o 00:03:54.115 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:54.115 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:54.115 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:54.115 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:54.115 CC module/bdev/raid/concat.o 00:03:54.115 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:54.115 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 
00:03:54.116 SYMLINK libspdk_vfu_device.so 00:03:54.374 LIB libspdk_fsdev_aio.a 00:03:54.374 SO libspdk_fsdev_aio.so.1.0 00:03:54.374 LIB libspdk_sock_posix.a 00:03:54.374 SO libspdk_sock_posix.so.6.0 00:03:54.374 SYMLINK libspdk_fsdev_aio.so 00:03:54.374 LIB libspdk_blobfs_bdev.a 00:03:54.633 LIB libspdk_bdev_gpt.a 00:03:54.633 SO libspdk_blobfs_bdev.so.6.0 00:03:54.633 SYMLINK libspdk_sock_posix.so 00:03:54.633 SO libspdk_bdev_gpt.so.6.0 00:03:54.633 LIB libspdk_bdev_error.a 00:03:54.633 LIB libspdk_bdev_split.a 00:03:54.633 SYMLINK libspdk_blobfs_bdev.so 00:03:54.633 LIB libspdk_bdev_zone_block.a 00:03:54.633 SYMLINK libspdk_bdev_gpt.so 00:03:54.633 SO libspdk_bdev_error.so.6.0 00:03:54.633 SO libspdk_bdev_split.so.6.0 00:03:54.633 SO libspdk_bdev_zone_block.so.6.0 00:03:54.633 LIB libspdk_bdev_null.a 00:03:54.633 SO libspdk_bdev_null.so.6.0 00:03:54.633 SYMLINK libspdk_bdev_split.so 00:03:54.633 SYMLINK libspdk_bdev_error.so 00:03:54.633 LIB libspdk_bdev_ftl.a 00:03:54.633 SYMLINK libspdk_bdev_zone_block.so 00:03:54.633 LIB libspdk_bdev_passthru.a 00:03:54.633 SO libspdk_bdev_ftl.so.6.0 00:03:54.633 SO libspdk_bdev_passthru.so.6.0 00:03:54.633 SYMLINK libspdk_bdev_null.so 00:03:54.633 LIB libspdk_bdev_aio.a 00:03:54.633 LIB libspdk_bdev_delay.a 00:03:54.633 SO libspdk_bdev_aio.so.6.0 00:03:54.633 LIB libspdk_bdev_iscsi.a 00:03:54.633 LIB libspdk_bdev_malloc.a 00:03:54.633 SO libspdk_bdev_delay.so.6.0 00:03:54.633 SYMLINK libspdk_bdev_ftl.so 00:03:54.633 SO libspdk_bdev_iscsi.so.6.0 00:03:54.633 SYMLINK libspdk_bdev_passthru.so 00:03:54.893 SO libspdk_bdev_malloc.so.6.0 00:03:54.893 SYMLINK libspdk_bdev_aio.so 00:03:54.893 SYMLINK libspdk_bdev_delay.so 00:03:54.893 SYMLINK libspdk_bdev_iscsi.so 00:03:54.894 SYMLINK libspdk_bdev_malloc.so 00:03:54.894 LIB libspdk_bdev_virtio.a 00:03:54.894 LIB libspdk_bdev_lvol.a 00:03:54.894 SO libspdk_bdev_virtio.so.6.0 00:03:54.894 SO libspdk_bdev_lvol.so.6.0 00:03:54.894 SYMLINK libspdk_bdev_virtio.so 00:03:54.894 SYMLINK libspdk_bdev_lvol.so 00:03:55.464 LIB libspdk_bdev_raid.a 00:03:55.464 SO libspdk_bdev_raid.so.6.0 00:03:55.464 SYMLINK libspdk_bdev_raid.so 00:03:56.859 LIB libspdk_bdev_nvme.a 00:03:56.859 SO libspdk_bdev_nvme.so.7.1 00:03:56.859 SYMLINK libspdk_bdev_nvme.so 00:03:57.426 CC module/event/subsystems/vmd/vmd.o 00:03:57.426 CC module/event/subsystems/iobuf/iobuf.o 00:03:57.426 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:57.426 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:57.426 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:57.426 CC module/event/subsystems/scheduler/scheduler.o 00:03:57.426 CC module/event/subsystems/fsdev/fsdev.o 00:03:57.426 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:57.426 CC module/event/subsystems/keyring/keyring.o 00:03:57.426 CC module/event/subsystems/sock/sock.o 00:03:57.426 LIB libspdk_event_keyring.a 00:03:57.427 LIB libspdk_event_vhost_blk.a 00:03:57.427 LIB libspdk_event_fsdev.a 00:03:57.427 LIB libspdk_event_vmd.a 00:03:57.427 LIB libspdk_event_vfu_tgt.a 00:03:57.427 LIB libspdk_event_scheduler.a 00:03:57.427 LIB libspdk_event_sock.a 00:03:57.427 SO libspdk_event_keyring.so.1.0 00:03:57.427 LIB libspdk_event_iobuf.a 00:03:57.427 SO libspdk_event_vhost_blk.so.3.0 00:03:57.427 SO libspdk_event_fsdev.so.1.0 00:03:57.427 SO libspdk_event_vfu_tgt.so.3.0 00:03:57.427 SO libspdk_event_scheduler.so.4.0 00:03:57.427 SO libspdk_event_sock.so.5.0 00:03:57.427 SO libspdk_event_vmd.so.6.0 00:03:57.427 SO libspdk_event_iobuf.so.3.0 00:03:57.427 SYMLINK libspdk_event_keyring.so 00:03:57.427 
SYMLINK libspdk_event_vhost_blk.so 00:03:57.427 SYMLINK libspdk_event_fsdev.so 00:03:57.427 SYMLINK libspdk_event_vfu_tgt.so 00:03:57.427 SYMLINK libspdk_event_scheduler.so 00:03:57.427 SYMLINK libspdk_event_sock.so 00:03:57.427 SYMLINK libspdk_event_vmd.so 00:03:57.427 SYMLINK libspdk_event_iobuf.so 00:03:57.685 CC module/event/subsystems/accel/accel.o 00:03:57.944 LIB libspdk_event_accel.a 00:03:57.944 SO libspdk_event_accel.so.6.0 00:03:57.944 SYMLINK libspdk_event_accel.so 00:03:58.203 CC module/event/subsystems/bdev/bdev.o 00:03:58.203 LIB libspdk_event_bdev.a 00:03:58.203 SO libspdk_event_bdev.so.6.0 00:03:58.465 SYMLINK libspdk_event_bdev.so 00:03:58.465 CC module/event/subsystems/ublk/ublk.o 00:03:58.465 CC module/event/subsystems/nbd/nbd.o 00:03:58.465 CC module/event/subsystems/scsi/scsi.o 00:03:58.465 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:58.465 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:58.724 LIB libspdk_event_ublk.a 00:03:58.724 LIB libspdk_event_nbd.a 00:03:58.724 LIB libspdk_event_scsi.a 00:03:58.724 SO libspdk_event_ublk.so.3.0 00:03:58.724 SO libspdk_event_nbd.so.6.0 00:03:58.724 SO libspdk_event_scsi.so.6.0 00:03:58.724 SYMLINK libspdk_event_nbd.so 00:03:58.724 SYMLINK libspdk_event_ublk.so 00:03:58.724 SYMLINK libspdk_event_scsi.so 00:03:58.724 LIB libspdk_event_nvmf.a 00:03:58.724 SO libspdk_event_nvmf.so.6.0 00:03:58.724 SYMLINK libspdk_event_nvmf.so 00:03:58.982 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:58.982 CC module/event/subsystems/iscsi/iscsi.o 00:03:58.982 LIB libspdk_event_vhost_scsi.a 00:03:58.982 SO libspdk_event_vhost_scsi.so.3.0 00:03:58.982 LIB libspdk_event_iscsi.a 00:03:59.241 SO libspdk_event_iscsi.so.6.0 00:03:59.241 SYMLINK libspdk_event_vhost_scsi.so 00:03:59.241 SYMLINK libspdk_event_iscsi.so 00:03:59.241 SO libspdk.so.6.0 00:03:59.241 SYMLINK libspdk.so 00:03:59.507 CXX app/trace/trace.o 00:03:59.507 CC app/trace_record/trace_record.o 00:03:59.507 CC test/rpc_client/rpc_client_test.o 00:03:59.507 CC app/spdk_nvme_discover/discovery_aer.o 00:03:59.507 CC app/spdk_nvme_perf/perf.o 00:03:59.507 CC app/spdk_top/spdk_top.o 00:03:59.507 TEST_HEADER include/spdk/accel.h 00:03:59.507 TEST_HEADER include/spdk/accel_module.h 00:03:59.507 TEST_HEADER include/spdk/assert.h 00:03:59.507 CC app/spdk_lspci/spdk_lspci.o 00:03:59.507 TEST_HEADER include/spdk/barrier.h 00:03:59.507 TEST_HEADER include/spdk/base64.h 00:03:59.507 CC app/spdk_nvme_identify/identify.o 00:03:59.507 TEST_HEADER include/spdk/bdev.h 00:03:59.507 TEST_HEADER include/spdk/bdev_module.h 00:03:59.507 TEST_HEADER include/spdk/bdev_zone.h 00:03:59.507 TEST_HEADER include/spdk/bit_array.h 00:03:59.507 TEST_HEADER include/spdk/bit_pool.h 00:03:59.507 TEST_HEADER include/spdk/blob_bdev.h 00:03:59.507 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:59.507 TEST_HEADER include/spdk/blobfs.h 00:03:59.507 TEST_HEADER include/spdk/conf.h 00:03:59.507 TEST_HEADER include/spdk/blob.h 00:03:59.507 TEST_HEADER include/spdk/config.h 00:03:59.507 TEST_HEADER include/spdk/cpuset.h 00:03:59.507 TEST_HEADER include/spdk/crc16.h 00:03:59.507 TEST_HEADER include/spdk/crc32.h 00:03:59.507 TEST_HEADER include/spdk/dif.h 00:03:59.507 TEST_HEADER include/spdk/crc64.h 00:03:59.507 TEST_HEADER include/spdk/dma.h 00:03:59.507 TEST_HEADER include/spdk/endian.h 00:03:59.507 TEST_HEADER include/spdk/env.h 00:03:59.507 TEST_HEADER include/spdk/env_dpdk.h 00:03:59.507 TEST_HEADER include/spdk/event.h 00:03:59.507 TEST_HEADER include/spdk/fd.h 00:03:59.507 TEST_HEADER include/spdk/fd_group.h 
00:03:59.507 TEST_HEADER include/spdk/file.h 00:03:59.507 TEST_HEADER include/spdk/fsdev.h 00:03:59.507 TEST_HEADER include/spdk/fsdev_module.h 00:03:59.507 TEST_HEADER include/spdk/ftl.h 00:03:59.507 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:59.507 TEST_HEADER include/spdk/gpt_spec.h 00:03:59.507 TEST_HEADER include/spdk/hexlify.h 00:03:59.507 TEST_HEADER include/spdk/histogram_data.h 00:03:59.507 TEST_HEADER include/spdk/idxd.h 00:03:59.507 TEST_HEADER include/spdk/idxd_spec.h 00:03:59.507 TEST_HEADER include/spdk/init.h 00:03:59.507 TEST_HEADER include/spdk/ioat.h 00:03:59.507 TEST_HEADER include/spdk/ioat_spec.h 00:03:59.507 TEST_HEADER include/spdk/iscsi_spec.h 00:03:59.507 TEST_HEADER include/spdk/json.h 00:03:59.507 TEST_HEADER include/spdk/jsonrpc.h 00:03:59.507 TEST_HEADER include/spdk/keyring.h 00:03:59.507 TEST_HEADER include/spdk/keyring_module.h 00:03:59.507 TEST_HEADER include/spdk/log.h 00:03:59.507 TEST_HEADER include/spdk/likely.h 00:03:59.507 TEST_HEADER include/spdk/lvol.h 00:03:59.507 TEST_HEADER include/spdk/memory.h 00:03:59.507 TEST_HEADER include/spdk/md5.h 00:03:59.507 TEST_HEADER include/spdk/mmio.h 00:03:59.507 TEST_HEADER include/spdk/nbd.h 00:03:59.507 TEST_HEADER include/spdk/net.h 00:03:59.507 TEST_HEADER include/spdk/notify.h 00:03:59.507 TEST_HEADER include/spdk/nvme.h 00:03:59.507 TEST_HEADER include/spdk/nvme_intel.h 00:03:59.507 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:59.507 TEST_HEADER include/spdk/nvme_spec.h 00:03:59.507 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:59.507 TEST_HEADER include/spdk/nvme_zns.h 00:03:59.507 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:59.507 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:59.507 TEST_HEADER include/spdk/nvmf.h 00:03:59.507 TEST_HEADER include/spdk/nvmf_spec.h 00:03:59.507 TEST_HEADER include/spdk/opal.h 00:03:59.507 TEST_HEADER include/spdk/nvmf_transport.h 00:03:59.507 TEST_HEADER include/spdk/pci_ids.h 00:03:59.507 TEST_HEADER include/spdk/opal_spec.h 00:03:59.507 TEST_HEADER include/spdk/pipe.h 00:03:59.507 TEST_HEADER include/spdk/reduce.h 00:03:59.507 TEST_HEADER include/spdk/queue.h 00:03:59.507 TEST_HEADER include/spdk/rpc.h 00:03:59.507 TEST_HEADER include/spdk/scheduler.h 00:03:59.507 TEST_HEADER include/spdk/scsi.h 00:03:59.507 TEST_HEADER include/spdk/scsi_spec.h 00:03:59.507 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:59.507 TEST_HEADER include/spdk/stdinc.h 00:03:59.507 TEST_HEADER include/spdk/sock.h 00:03:59.507 TEST_HEADER include/spdk/string.h 00:03:59.507 TEST_HEADER include/spdk/thread.h 00:03:59.507 TEST_HEADER include/spdk/trace.h 00:03:59.507 TEST_HEADER include/spdk/trace_parser.h 00:03:59.507 TEST_HEADER include/spdk/tree.h 00:03:59.507 TEST_HEADER include/spdk/util.h 00:03:59.507 TEST_HEADER include/spdk/ublk.h 00:03:59.507 TEST_HEADER include/spdk/uuid.h 00:03:59.507 TEST_HEADER include/spdk/version.h 00:03:59.507 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:59.507 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:59.507 TEST_HEADER include/spdk/vhost.h 00:03:59.508 TEST_HEADER include/spdk/vmd.h 00:03:59.508 TEST_HEADER include/spdk/xor.h 00:03:59.508 TEST_HEADER include/spdk/zipf.h 00:03:59.508 CXX test/cpp_headers/accel.o 00:03:59.508 CXX test/cpp_headers/accel_module.o 00:03:59.508 CXX test/cpp_headers/barrier.o 00:03:59.508 CXX test/cpp_headers/assert.o 00:03:59.508 CXX test/cpp_headers/base64.o 00:03:59.508 CXX test/cpp_headers/bdev.o 00:03:59.508 CXX test/cpp_headers/bdev_module.o 00:03:59.508 CXX test/cpp_headers/bdev_zone.o 00:03:59.508 CXX 
test/cpp_headers/bit_array.o 00:03:59.508 CXX test/cpp_headers/bit_pool.o 00:03:59.508 CXX test/cpp_headers/blob_bdev.o 00:03:59.508 CXX test/cpp_headers/blobfs_bdev.o 00:03:59.508 CXX test/cpp_headers/blobfs.o 00:03:59.508 CXX test/cpp_headers/blob.o 00:03:59.508 CXX test/cpp_headers/conf.o 00:03:59.508 CXX test/cpp_headers/config.o 00:03:59.508 CXX test/cpp_headers/cpuset.o 00:03:59.508 CC app/spdk_dd/spdk_dd.o 00:03:59.508 CXX test/cpp_headers/crc16.o 00:03:59.508 CC app/nvmf_tgt/nvmf_main.o 00:03:59.508 CC app/iscsi_tgt/iscsi_tgt.o 00:03:59.508 CXX test/cpp_headers/crc32.o 00:03:59.508 CC test/app/stub/stub.o 00:03:59.508 CC test/app/jsoncat/jsoncat.o 00:03:59.508 CC test/thread/poller_perf/poller_perf.o 00:03:59.508 CC test/env/vtophys/vtophys.o 00:03:59.508 CC examples/ioat/verify/verify.o 00:03:59.508 CC test/app/histogram_perf/histogram_perf.o 00:03:59.508 CC test/env/pci/pci_ut.o 00:03:59.508 CC test/env/memory/memory_ut.o 00:03:59.508 CC examples/ioat/perf/perf.o 00:03:59.508 CC examples/util/zipf/zipf.o 00:03:59.508 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:59.508 CC app/fio/nvme/fio_plugin.o 00:03:59.508 CC app/spdk_tgt/spdk_tgt.o 00:03:59.767 CC test/dma/test_dma/test_dma.o 00:03:59.767 CC test/app/bdev_svc/bdev_svc.o 00:03:59.767 CC app/fio/bdev/fio_plugin.o 00:03:59.767 CC test/env/mem_callbacks/mem_callbacks.o 00:03:59.767 LINK spdk_lspci 00:03:59.767 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:59.767 LINK rpc_client_test 00:04:00.033 LINK spdk_nvme_discover 00:04:00.033 LINK jsoncat 00:04:00.033 LINK vtophys 00:04:00.033 LINK interrupt_tgt 00:04:00.033 LINK histogram_perf 00:04:00.033 LINK poller_perf 00:04:00.033 LINK zipf 00:04:00.033 CXX test/cpp_headers/crc64.o 00:04:00.033 LINK spdk_trace_record 00:04:00.033 CXX test/cpp_headers/dif.o 00:04:00.033 LINK nvmf_tgt 00:04:00.033 CXX test/cpp_headers/dma.o 00:04:00.033 LINK env_dpdk_post_init 00:04:00.033 CXX test/cpp_headers/endian.o 00:04:00.033 CXX test/cpp_headers/env_dpdk.o 00:04:00.033 CXX test/cpp_headers/env.o 00:04:00.033 CXX test/cpp_headers/event.o 00:04:00.033 CXX test/cpp_headers/fd_group.o 00:04:00.033 LINK stub 00:04:00.033 CXX test/cpp_headers/fd.o 00:04:00.033 CXX test/cpp_headers/file.o 00:04:00.033 CXX test/cpp_headers/fsdev.o 00:04:00.033 LINK iscsi_tgt 00:04:00.033 CXX test/cpp_headers/fsdev_module.o 00:04:00.033 CXX test/cpp_headers/ftl.o 00:04:00.033 CXX test/cpp_headers/fuse_dispatcher.o 00:04:00.033 CXX test/cpp_headers/gpt_spec.o 00:04:00.033 CXX test/cpp_headers/hexlify.o 00:04:00.033 LINK verify 00:04:00.033 LINK ioat_perf 00:04:00.033 LINK bdev_svc 00:04:00.033 CXX test/cpp_headers/histogram_data.o 00:04:00.033 LINK spdk_tgt 00:04:00.033 CXX test/cpp_headers/idxd.o 00:04:00.293 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:00.293 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:00.293 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:00.293 CXX test/cpp_headers/idxd_spec.o 00:04:00.293 CXX test/cpp_headers/init.o 00:04:00.293 CXX test/cpp_headers/ioat.o 00:04:00.293 CXX test/cpp_headers/ioat_spec.o 00:04:00.293 CXX test/cpp_headers/iscsi_spec.o 00:04:00.293 CXX test/cpp_headers/json.o 00:04:00.293 LINK spdk_dd 00:04:00.293 CXX test/cpp_headers/jsonrpc.o 00:04:00.293 CXX test/cpp_headers/keyring.o 00:04:00.293 CXX test/cpp_headers/keyring_module.o 00:04:00.293 CXX test/cpp_headers/likely.o 00:04:00.293 CXX test/cpp_headers/log.o 00:04:00.293 LINK spdk_trace 00:04:00.293 CXX test/cpp_headers/lvol.o 00:04:00.293 CXX test/cpp_headers/md5.o 00:04:00.293 CXX test/cpp_headers/memory.o 
00:04:00.293 CXX test/cpp_headers/mmio.o 00:04:00.558 CXX test/cpp_headers/nbd.o 00:04:00.558 CXX test/cpp_headers/net.o 00:04:00.558 LINK pci_ut 00:04:00.558 CXX test/cpp_headers/notify.o 00:04:00.558 CXX test/cpp_headers/nvme.o 00:04:00.558 CXX test/cpp_headers/nvme_intel.o 00:04:00.558 CXX test/cpp_headers/nvme_ocssd.o 00:04:00.558 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:00.558 CXX test/cpp_headers/nvme_spec.o 00:04:00.558 CXX test/cpp_headers/nvme_zns.o 00:04:00.558 CXX test/cpp_headers/nvmf_cmd.o 00:04:00.558 CC examples/sock/hello_world/hello_sock.o 00:04:00.558 CC test/event/event_perf/event_perf.o 00:04:00.558 CC test/event/reactor/reactor.o 00:04:00.558 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:00.558 CC test/event/reactor_perf/reactor_perf.o 00:04:00.558 LINK nvme_fuzz 00:04:00.558 CC examples/vmd/lsvmd/lsvmd.o 00:04:00.558 CXX test/cpp_headers/nvmf.o 00:04:00.558 CXX test/cpp_headers/nvmf_spec.o 00:04:00.558 LINK spdk_nvme 00:04:00.558 CC test/event/app_repeat/app_repeat.o 00:04:00.558 CC examples/thread/thread/thread_ex.o 00:04:00.558 CXX test/cpp_headers/nvmf_transport.o 00:04:00.823 CXX test/cpp_headers/opal.o 00:04:00.823 CXX test/cpp_headers/opal_spec.o 00:04:00.823 CC examples/idxd/perf/perf.o 00:04:00.823 CXX test/cpp_headers/pci_ids.o 00:04:00.823 LINK spdk_bdev 00:04:00.823 LINK test_dma 00:04:00.823 CC examples/vmd/led/led.o 00:04:00.823 CXX test/cpp_headers/pipe.o 00:04:00.823 CC test/event/scheduler/scheduler.o 00:04:00.823 CXX test/cpp_headers/queue.o 00:04:00.823 CXX test/cpp_headers/reduce.o 00:04:00.823 CXX test/cpp_headers/rpc.o 00:04:00.823 CXX test/cpp_headers/scheduler.o 00:04:00.823 CXX test/cpp_headers/scsi.o 00:04:00.823 CXX test/cpp_headers/scsi_spec.o 00:04:00.823 CXX test/cpp_headers/sock.o 00:04:00.823 CXX test/cpp_headers/stdinc.o 00:04:00.823 CXX test/cpp_headers/string.o 00:04:00.823 CXX test/cpp_headers/thread.o 00:04:00.823 CXX test/cpp_headers/trace.o 00:04:00.823 LINK reactor 00:04:00.823 CXX test/cpp_headers/trace_parser.o 00:04:00.823 CXX test/cpp_headers/tree.o 00:04:00.823 LINK event_perf 00:04:00.823 CXX test/cpp_headers/ublk.o 00:04:00.823 CXX test/cpp_headers/util.o 00:04:00.823 CXX test/cpp_headers/uuid.o 00:04:00.823 CXX test/cpp_headers/version.o 00:04:00.823 LINK reactor_perf 00:04:00.823 CXX test/cpp_headers/vfio_user_pci.o 00:04:00.823 LINK lsvmd 00:04:00.823 CXX test/cpp_headers/vfio_user_spec.o 00:04:01.082 CXX test/cpp_headers/vhost.o 00:04:01.082 CXX test/cpp_headers/vmd.o 00:04:01.082 CXX test/cpp_headers/xor.o 00:04:01.082 CXX test/cpp_headers/zipf.o 00:04:01.082 LINK app_repeat 00:04:01.082 LINK vhost_fuzz 00:04:01.082 LINK spdk_nvme_perf 00:04:01.082 LINK mem_callbacks 00:04:01.082 LINK led 00:04:01.082 CC app/vhost/vhost.o 00:04:01.082 LINK hello_sock 00:04:01.082 LINK spdk_nvme_identify 00:04:01.082 LINK thread 00:04:01.082 LINK spdk_top 00:04:01.341 LINK scheduler 00:04:01.341 CC test/nvme/sgl/sgl.o 00:04:01.341 CC test/nvme/compliance/nvme_compliance.o 00:04:01.341 CC test/nvme/err_injection/err_injection.o 00:04:01.341 CC test/nvme/boot_partition/boot_partition.o 00:04:01.341 CC test/nvme/simple_copy/simple_copy.o 00:04:01.341 CC test/nvme/fdp/fdp.o 00:04:01.341 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:01.341 CC test/nvme/startup/startup.o 00:04:01.341 CC test/nvme/reserve/reserve.o 00:04:01.341 CC test/nvme/e2edp/nvme_dp.o 00:04:01.341 CC test/nvme/fused_ordering/fused_ordering.o 00:04:01.341 CC test/nvme/aer/aer.o 00:04:01.342 CC test/nvme/overhead/overhead.o 00:04:01.342 CC test/nvme/reset/reset.o 
00:04:01.342 CC test/nvme/connect_stress/connect_stress.o 00:04:01.342 CC test/nvme/cuse/cuse.o 00:04:01.342 LINK idxd_perf 00:04:01.342 CC test/accel/dif/dif.o 00:04:01.342 CC test/blobfs/mkfs/mkfs.o 00:04:01.342 LINK vhost 00:04:01.342 CC test/lvol/esnap/esnap.o 00:04:01.600 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:01.600 CC examples/nvme/arbitration/arbitration.o 00:04:01.600 CC examples/nvme/reconnect/reconnect.o 00:04:01.600 CC examples/nvme/hello_world/hello_world.o 00:04:01.600 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:01.600 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:01.600 CC examples/nvme/hotplug/hotplug.o 00:04:01.600 CC examples/nvme/abort/abort.o 00:04:01.600 LINK boot_partition 00:04:01.600 LINK doorbell_aers 00:04:01.600 LINK connect_stress 00:04:01.600 LINK fused_ordering 00:04:01.600 CC examples/accel/perf/accel_perf.o 00:04:01.600 LINK simple_copy 00:04:01.600 LINK err_injection 00:04:01.600 LINK startup 00:04:01.600 CC examples/blob/cli/blobcli.o 00:04:01.600 LINK sgl 00:04:01.860 LINK nvme_dp 00:04:01.860 CC examples/blob/hello_world/hello_blob.o 00:04:01.860 LINK reset 00:04:01.860 LINK mkfs 00:04:01.860 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:01.860 LINK reserve 00:04:01.860 LINK overhead 00:04:01.860 LINK pmr_persistence 00:04:01.860 LINK aer 00:04:01.860 LINK memory_ut 00:04:01.860 LINK nvme_compliance 00:04:01.860 LINK fdp 00:04:01.860 LINK hello_world 00:04:01.860 LINK cmb_copy 00:04:02.118 LINK hotplug 00:04:02.118 LINK reconnect 00:04:02.118 LINK abort 00:04:02.118 LINK hello_blob 00:04:02.118 LINK arbitration 00:04:02.118 LINK nvme_manage 00:04:02.118 LINK hello_fsdev 00:04:02.375 LINK blobcli 00:04:02.375 LINK accel_perf 00:04:02.375 LINK dif 00:04:02.633 LINK iscsi_fuzz 00:04:02.633 CC examples/bdev/hello_world/hello_bdev.o 00:04:02.633 CC examples/bdev/bdevperf/bdevperf.o 00:04:02.891 CC test/bdev/bdevio/bdevio.o 00:04:02.891 LINK hello_bdev 00:04:03.149 LINK cuse 00:04:03.149 LINK bdevio 00:04:03.408 LINK bdevperf 00:04:03.975 CC examples/nvmf/nvmf/nvmf.o 00:04:04.233 LINK nvmf 00:04:06.762 LINK esnap 00:04:07.021 00:04:07.021 real 1m9.390s 00:04:07.021 user 11m47.803s 00:04:07.021 sys 2m34.555s 00:04:07.021 10:22:55 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:07.021 10:22:55 make -- common/autotest_common.sh@10 -- $ set +x 00:04:07.021 ************************************ 00:04:07.021 END TEST make 00:04:07.021 ************************************ 00:04:07.021 10:22:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:07.021 10:22:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:07.021 10:22:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:07.021 10:22:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.021 10:22:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:07.021 10:22:55 -- pm/common@44 -- $ pid=176063 00:04:07.021 10:22:55 -- pm/common@50 -- $ kill -TERM 176063 00:04:07.021 10:22:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.021 10:22:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:07.021 10:22:55 -- pm/common@44 -- $ pid=176065 00:04:07.021 10:22:55 -- pm/common@50 -- $ kill -TERM 176065 00:04:07.021 10:22:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.021 10:22:55 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:07.021 10:22:55 -- pm/common@44 -- $ pid=176067 00:04:07.021 10:22:55 -- pm/common@50 -- $ kill -TERM 176067 00:04:07.021 10:22:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.021 10:22:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:07.021 10:22:55 -- pm/common@44 -- $ pid=176097 00:04:07.021 10:22:55 -- pm/common@50 -- $ sudo -E kill -TERM 176097 00:04:07.021 10:22:55 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:07.021 10:22:55 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:07.021 10:22:55 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:07.021 10:22:55 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:07.021 10:22:55 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:07.021 10:22:55 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:07.021 10:22:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.021 10:22:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.021 10:22:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.021 10:22:55 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.021 10:22:55 -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.021 10:22:55 -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.021 10:22:55 -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.021 10:22:55 -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.021 10:22:55 -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.021 10:22:55 -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.021 10:22:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.021 10:22:55 -- scripts/common.sh@344 -- # case "$op" in 00:04:07.021 10:22:55 -- scripts/common.sh@345 -- # : 1 00:04:07.021 10:22:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.021 10:22:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.021 10:22:55 -- scripts/common.sh@365 -- # decimal 1 00:04:07.021 10:22:55 -- scripts/common.sh@353 -- # local d=1 00:04:07.021 10:22:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.021 10:22:55 -- scripts/common.sh@355 -- # echo 1 00:04:07.022 10:22:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.022 10:22:55 -- scripts/common.sh@366 -- # decimal 2 00:04:07.022 10:22:55 -- scripts/common.sh@353 -- # local d=2 00:04:07.022 10:22:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.022 10:22:55 -- scripts/common.sh@355 -- # echo 2 00:04:07.022 10:22:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.022 10:22:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.022 10:22:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.022 10:22:55 -- scripts/common.sh@368 -- # return 0 00:04:07.022 10:22:55 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.022 10:22:55 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:07.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.022 --rc genhtml_branch_coverage=1 00:04:07.022 --rc genhtml_function_coverage=1 00:04:07.022 --rc genhtml_legend=1 00:04:07.022 --rc geninfo_all_blocks=1 00:04:07.022 --rc geninfo_unexecuted_blocks=1 00:04:07.022 00:04:07.022 ' 00:04:07.022 10:22:55 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:07.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.022 --rc genhtml_branch_coverage=1 00:04:07.022 --rc genhtml_function_coverage=1 00:04:07.022 --rc genhtml_legend=1 00:04:07.022 --rc geninfo_all_blocks=1 00:04:07.022 --rc geninfo_unexecuted_blocks=1 00:04:07.022 00:04:07.022 ' 00:04:07.022 10:22:55 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:07.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.022 --rc genhtml_branch_coverage=1 00:04:07.022 --rc genhtml_function_coverage=1 00:04:07.022 --rc genhtml_legend=1 00:04:07.022 --rc geninfo_all_blocks=1 00:04:07.022 --rc geninfo_unexecuted_blocks=1 00:04:07.022 00:04:07.022 ' 00:04:07.022 10:22:55 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:07.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.022 --rc genhtml_branch_coverage=1 00:04:07.022 --rc genhtml_function_coverage=1 00:04:07.022 --rc genhtml_legend=1 00:04:07.022 --rc geninfo_all_blocks=1 00:04:07.022 --rc geninfo_unexecuted_blocks=1 00:04:07.022 00:04:07.022 ' 00:04:07.022 10:22:55 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:07.022 10:22:55 -- nvmf/common.sh@7 -- # uname -s 00:04:07.022 10:22:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.022 10:22:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.022 10:22:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.022 10:22:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.022 10:22:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.022 10:22:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.022 10:22:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.022 10:22:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.022 10:22:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.022 10:22:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.022 10:22:55 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:04:07.022 10:22:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:04:07.022 10:22:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.022 10:22:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.022 10:22:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:07.022 10:22:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:07.022 10:22:55 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:07.022 10:22:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:07.022 10:22:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.022 10:22:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.022 10:22:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.022 10:22:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.022 10:22:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.022 10:22:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.022 10:22:55 -- paths/export.sh@5 -- # export PATH 00:04:07.022 10:22:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.022 10:22:55 -- nvmf/common.sh@51 -- # : 0 00:04:07.022 10:22:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:07.022 10:22:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:07.022 10:22:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:07.022 10:22:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.022 10:22:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.022 10:22:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:07.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:07.022 10:22:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:07.022 10:22:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:07.022 10:22:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:07.022 10:22:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:07.282 10:22:55 -- spdk/autotest.sh@32 -- # uname -s 00:04:07.282 10:22:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:07.282 10:22:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:07.282 10:22:55 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
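The nvmf/common.sh setup traced above establishes the host identity (NVME_HOSTNQN, NVME_HOSTID, the NVME_HOST flag array) that later nvme connect invocations reuse, together with the target ports and subsystem NQN. A minimal stand-alone sketch of the same idea, assuming nvme-cli is available and that the host ID is simply the UUID portion of the generated NQN (the helper itself may derive it differently):

    NVME_HOSTNQN=$(nvme gen-hostnqn)              # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # bare UUID portion (assumed derivation)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # A later connect against the TCP test target configured above might look like:
    # nvme connect -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"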
00:04:07.282 10:22:55 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:07.282 10:22:55 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:07.282 10:22:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:07.282 10:22:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:07.282 10:22:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:07.282 10:22:55 -- spdk/autotest.sh@48 -- # udevadm_pid=235358 00:04:07.282 10:22:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:07.282 10:22:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:07.282 10:22:55 -- pm/common@17 -- # local monitor 00:04:07.282 10:22:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.282 10:22:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.282 10:22:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.282 10:22:55 -- pm/common@21 -- # date +%s 00:04:07.282 10:22:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.282 10:22:55 -- pm/common@21 -- # date +%s 00:04:07.282 10:22:55 -- pm/common@25 -- # sleep 1 00:04:07.282 10:22:55 -- pm/common@21 -- # date +%s 00:04:07.282 10:22:55 -- pm/common@21 -- # date +%s 00:04:07.283 10:22:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731662575 00:04:07.283 10:22:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731662575 00:04:07.283 10:22:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731662575 00:04:07.283 10:22:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731662575 00:04:07.283 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731662575_collect-cpu-load.pm.log 00:04:07.283 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731662575_collect-vmstat.pm.log 00:04:07.283 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731662575_collect-cpu-temp.pm.log 00:04:07.283 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731662575_collect-bmc-pm.bmc.pm.log 00:04:08.220 10:22:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:08.220 10:22:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:08.220 10:22:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.220 10:22:56 -- common/autotest_common.sh@10 -- # set +x 00:04:08.220 10:22:56 -- spdk/autotest.sh@59 -- # create_test_list 00:04:08.220 10:22:56 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:08.220 10:22:56 -- common/autotest_common.sh@10 -- # set +x 00:04:08.220 10:22:56 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:08.220 10:22:56 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.220 10:22:56 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.220 10:22:56 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:08.220 10:22:56 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.220 10:22:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:08.220 10:22:56 -- common/autotest_common.sh@1455 -- # uname 00:04:08.220 10:22:56 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:08.220 10:22:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:08.220 10:22:56 -- common/autotest_common.sh@1475 -- # uname 00:04:08.220 10:22:56 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:08.220 10:22:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:08.220 10:22:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:08.220 lcov: LCOV version 1.15 00:04:08.220 10:22:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:48.248 10:23:33 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:48.248 10:23:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:48.248 10:23:33 -- common/autotest_common.sh@10 -- # set +x 00:04:48.248 10:23:33 -- spdk/autotest.sh@78 -- # rm -f 00:04:48.248 10:23:33 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:48.248 0000:81:00.0 (8086 0a54): Already using the nvme driver 00:04:48.248 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:48.248 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:48.248 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:48.248 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:48.248 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:48.248 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:48.248 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:48.248 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:48.248 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:48.248 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:48.248 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:48.248 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:48.248 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:48.248 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:48.248 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:48.248 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:48.248 10:23:35 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:48.248 10:23:35 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:48.248 10:23:35 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:48.248 10:23:35 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:48.248 10:23:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:48.248 10:23:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:48.248 10:23:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:48.248 10:23:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:48.248 10:23:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:48.248 10:23:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:48.248 10:23:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:48.248 10:23:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:48.248 10:23:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:48.248 10:23:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:48.248 10:23:35 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:48.248 No valid GPT data, bailing 00:04:48.248 10:23:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:48.248 10:23:35 -- scripts/common.sh@394 -- # pt= 00:04:48.248 10:23:35 -- scripts/common.sh@395 -- # return 1 00:04:48.248 10:23:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:48.248 1+0 records in 00:04:48.248 1+0 records out 00:04:48.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00238799 s, 439 MB/s 00:04:48.248 10:23:35 -- spdk/autotest.sh@105 -- # sync 00:04:48.248 10:23:35 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:48.248 10:23:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:48.248 10:23:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:49.184 10:23:37 -- spdk/autotest.sh@111 -- # uname -s 00:04:49.184 10:23:37 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:49.184 10:23:37 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:49.184 10:23:37 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:50.565 Hugepages 00:04:50.565 node hugesize free / total 00:04:50.565 node0 1048576kB 0 / 0 00:04:50.565 node0 2048kB 0 / 0 00:04:50.565 node1 1048576kB 0 / 0 00:04:50.565 node1 2048kB 0 / 0 00:04:50.565 00:04:50.565 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:50.565 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:50.565 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:50.565 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:50.565 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:50.565 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:50.565 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:50.565 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:50.565 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:50.565 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:50.565 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:50.565 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:50.565 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:50.565 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:50.565 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:50.565 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:50.565 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:50.565 NVMe 0000:81:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:04:50.565 10:23:38 -- spdk/autotest.sh@117 -- # uname -s 00:04:50.565 10:23:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:50.565 10:23:38 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:50.565 10:23:38 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:51.506 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:51.506 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:51.506 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:51.506 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:51.506 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:51.765 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:51.765 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:51.765 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:51.765 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:51.765 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:51.765 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:51.765 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:51.765 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:51.765 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:51.765 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:51.765 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:53.681 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:04:53.681 10:23:42 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:54.621 10:23:43 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:54.621 10:23:43 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:54.621 10:23:43 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:54.622 10:23:43 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:54.622 10:23:43 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:54.622 10:23:43 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:54.622 10:23:43 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.622 10:23:43 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:54.622 10:23:43 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:54.622 10:23:43 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:54.622 10:23:43 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:81:00.0 00:04:54.622 10:23:43 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:56.004 Waiting for block devices as requested 00:04:56.004 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:04:56.004 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:56.264 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:56.264 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:56.264 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:56.264 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:56.536 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:56.536 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:56.536 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:56.536 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:56.795 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:56.795 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:56.795 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:57.056 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:57.056 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:57.056 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:57.056 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:04:57.316 10:23:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:57.316 10:23:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:81:00.0 00:04:57.316 10:23:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:57.316 10:23:45 -- common/autotest_common.sh@1485 -- # grep 0000:81:00.0/nvme/nvme 00:04:57.316 10:23:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 00:04:57.316 10:23:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 ]] 00:04:57.316 10:23:45 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 00:04:57.316 10:23:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:57.317 10:23:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:57.317 10:23:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:57.317 10:23:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:57.317 10:23:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:57.317 10:23:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:57.317 10:23:45 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:04:57.317 10:23:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:57.317 10:23:45 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:57.317 10:23:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:57.317 10:23:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:57.317 10:23:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:57.317 10:23:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:57.317 10:23:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:57.317 10:23:45 -- common/autotest_common.sh@1541 -- # continue 00:04:57.317 10:23:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:57.317 10:23:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.317 10:23:45 -- common/autotest_common.sh@10 -- # set +x 00:04:57.317 10:23:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:57.317 10:23:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.317 10:23:45 -- common/autotest_common.sh@10 -- # set +x 00:04:57.317 10:23:45 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.695 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:58.695 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:58.695 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:58.695 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:58.695 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:58.695 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:58.695 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:58.695 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:58.695 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:58.695 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:58.695 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:58.695 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:58.695 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:58.695 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:58.695 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:58.695 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:00.607 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:05:00.607 10:23:48 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:05:00.607 10:23:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.607 10:23:48 -- common/autotest_common.sh@10 -- # set +x 00:05:00.607 10:23:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:00.607 10:23:48 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:00.607 10:23:48 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:00.607 10:23:48 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:00.607 10:23:48 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:00.607 10:23:48 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:00.607 10:23:48 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:00.607 10:23:48 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:00.607 10:23:48 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:00.607 10:23:48 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:00.607 10:23:48 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.607 10:23:48 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.607 10:23:48 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:00.607 10:23:48 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:00.607 10:23:48 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:81:00.0 00:05:00.607 10:23:48 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:00.607 10:23:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:81:00.0/device 00:05:00.607 10:23:49 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:00.607 10:23:49 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:00.607 10:23:49 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:00.607 10:23:49 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:00.607 10:23:49 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:81:00.0 00:05:00.608 10:23:49 -- common/autotest_common.sh@1577 -- # [[ -z 0000:81:00.0 ]] 00:05:00.608 10:23:49 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=245890 00:05:00.608 10:23:49 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.608 10:23:49 -- common/autotest_common.sh@1583 -- # waitforlisten 245890 00:05:00.608 10:23:49 -- common/autotest_common.sh@833 -- # '[' -z 245890 ']' 00:05:00.608 10:23:49 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.608 10:23:49 -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:00.608 10:23:49 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.608 10:23:49 -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:00.608 10:23:49 -- common/autotest_common.sh@10 -- # set +x 00:05:00.608 [2024-11-15 10:23:49.057474] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
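The opal_revert_cleanup path traced above builds its controller list by taking the BDFs reported by gen_nvme.sh | jq and keeping only those whose PCI device ID, read from sysfs, equals 0x0a54. A stand-alone sketch that collapses both steps into a plain sysfs walk (illustration only, not the project's helper):

    target=0x0a54                                          # device ID checked in the trace
    bdfs=()
    for dev in /sys/bus/pci/devices/*; do
        [[ -r "$dev/class" && -r "$dev/device" ]] || continue
        [[ $(cat "$dev/class") == 0x010802* ]] || continue # NVMe PCI class code
        [[ $(cat "$dev/device") == "$target" ]] || continue
        bdfs+=("${dev##*/}")                               # e.g. 0000:81:00.0
    done
    printf '%s\n' "${bdfs[@]}"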
00:05:00.608 [2024-11-15 10:23:49.057565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245890 ] 00:05:00.866 [2024-11-15 10:23:49.122835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.866 [2024-11-15 10:23:49.175885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.125 10:23:49 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:01.125 10:23:49 -- common/autotest_common.sh@866 -- # return 0 00:05:01.125 10:23:49 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:01.125 10:23:49 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:01.125 10:23:49 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:81:00.0 00:05:04.413 nvme0n1 00:05:04.413 10:23:52 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:04.413 [2024-11-15 10:23:52.785991] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:04.413 request: 00:05:04.413 { 00:05:04.413 "nvme_ctrlr_name": "nvme0", 00:05:04.413 "password": "test", 00:05:04.413 "method": "bdev_nvme_opal_revert", 00:05:04.413 "req_id": 1 00:05:04.413 } 00:05:04.413 Got JSON-RPC error response 00:05:04.413 response: 00:05:04.413 { 00:05:04.413 "code": -32602, 00:05:04.413 "message": "Invalid parameters" 00:05:04.413 } 00:05:04.413 10:23:52 -- common/autotest_common.sh@1589 -- # true 00:05:04.413 10:23:52 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:04.413 10:23:52 -- common/autotest_common.sh@1593 -- # killprocess 245890 00:05:04.413 10:23:52 -- common/autotest_common.sh@952 -- # '[' -z 245890 ']' 00:05:04.413 10:23:52 -- common/autotest_common.sh@956 -- # kill -0 245890 00:05:04.413 10:23:52 -- common/autotest_common.sh@957 -- # uname 00:05:04.413 10:23:52 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:04.413 10:23:52 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 245890 00:05:04.413 10:23:52 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:04.413 10:23:52 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:04.413 10:23:52 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 245890' 00:05:04.413 killing process with pid 245890 00:05:04.413 10:23:52 -- common/autotest_common.sh@971 -- # kill 245890 00:05:04.413 10:23:52 -- common/autotest_common.sh@976 -- # wait 245890 00:05:07.697 10:23:55 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:07.697 10:23:55 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:07.697 10:23:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:07.697 10:23:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:07.697 10:23:55 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:07.697 10:23:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:07.697 10:23:55 -- common/autotest_common.sh@10 -- # set +x 00:05:07.697 10:23:55 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:07.697 10:23:55 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:07.697 10:23:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:07.697 10:23:55 -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:05:07.697 10:23:55 -- common/autotest_common.sh@10 -- # set +x 00:05:07.697 ************************************ 00:05:07.697 START TEST env 00:05:07.697 ************************************ 00:05:07.697 10:23:55 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:07.697 * Looking for test storage... 00:05:07.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:07.697 10:23:55 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.697 10:23:55 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.697 10:23:55 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.697 10:23:55 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.697 10:23:55 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.697 10:23:55 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.697 10:23:55 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.697 10:23:55 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.697 10:23:55 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.697 10:23:55 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.697 10:23:55 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.697 10:23:55 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.697 10:23:55 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.697 10:23:55 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.697 10:23:55 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.697 10:23:55 env -- scripts/common.sh@344 -- # case "$op" in 00:05:07.697 10:23:55 env -- scripts/common.sh@345 -- # : 1 00:05:07.697 10:23:55 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.697 10:23:55 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.697 10:23:55 env -- scripts/common.sh@365 -- # decimal 1 00:05:07.697 10:23:55 env -- scripts/common.sh@353 -- # local d=1 00:05:07.697 10:23:55 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.697 10:23:55 env -- scripts/common.sh@355 -- # echo 1 00:05:07.697 10:23:55 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.697 10:23:55 env -- scripts/common.sh@366 -- # decimal 2 00:05:07.697 10:23:55 env -- scripts/common.sh@353 -- # local d=2 00:05:07.697 10:23:55 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.697 10:23:55 env -- scripts/common.sh@355 -- # echo 2 00:05:07.697 10:23:55 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.697 10:23:55 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.697 10:23:55 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.697 10:23:55 env -- scripts/common.sh@368 -- # return 0 00:05:07.697 10:23:55 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.697 10:23:55 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.697 --rc genhtml_branch_coverage=1 00:05:07.697 --rc genhtml_function_coverage=1 00:05:07.697 --rc genhtml_legend=1 00:05:07.698 --rc geninfo_all_blocks=1 00:05:07.698 --rc geninfo_unexecuted_blocks=1 00:05:07.698 00:05:07.698 ' 00:05:07.698 10:23:55 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.698 --rc genhtml_branch_coverage=1 00:05:07.698 --rc genhtml_function_coverage=1 00:05:07.698 --rc genhtml_legend=1 00:05:07.698 --rc geninfo_all_blocks=1 00:05:07.698 --rc geninfo_unexecuted_blocks=1 00:05:07.698 00:05:07.698 ' 00:05:07.698 10:23:55 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:07.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.698 --rc genhtml_branch_coverage=1 00:05:07.698 --rc genhtml_function_coverage=1 00:05:07.698 --rc genhtml_legend=1 00:05:07.698 --rc geninfo_all_blocks=1 00:05:07.698 --rc geninfo_unexecuted_blocks=1 00:05:07.698 00:05:07.698 ' 00:05:07.698 10:23:55 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.698 --rc genhtml_branch_coverage=1 00:05:07.698 --rc genhtml_function_coverage=1 00:05:07.698 --rc genhtml_legend=1 00:05:07.698 --rc geninfo_all_blocks=1 00:05:07.698 --rc geninfo_unexecuted_blocks=1 00:05:07.698 00:05:07.698 ' 00:05:07.698 10:23:55 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:07.698 10:23:55 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:07.698 10:23:55 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.698 10:23:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.698 ************************************ 00:05:07.698 START TEST env_memory 00:05:07.698 ************************************ 00:05:07.698 10:23:55 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:07.698 00:05:07.698 00:05:07.698 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.698 http://cunit.sourceforge.net/ 00:05:07.698 00:05:07.698 00:05:07.698 Suite: memory 00:05:07.698 Test: alloc and free memory map ...[2024-11-15 10:23:55.765879] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:07.698 passed 00:05:07.698 Test: mem map translation ...[2024-11-15 10:23:55.785788] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:07.698 [2024-11-15 10:23:55.785810] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:07.698 [2024-11-15 10:23:55.785860] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:07.698 [2024-11-15 10:23:55.785873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:07.698 passed 00:05:07.698 Test: mem map registration ...[2024-11-15 10:23:55.826323] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:07.698 [2024-11-15 10:23:55.826342] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:07.698 passed 00:05:07.698 Test: mem map adjacent registrations ...passed 00:05:07.698 00:05:07.698 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.698 suites 1 1 n/a 0 0 00:05:07.698 tests 4 4 4 0 0 00:05:07.698 asserts 152 152 152 0 n/a 00:05:07.698 00:05:07.698 Elapsed time = 0.141 seconds 00:05:07.698 00:05:07.698 real 0m0.149s 00:05:07.698 user 0m0.141s 00:05:07.698 sys 0m0.008s 00:05:07.698 10:23:55 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.698 10:23:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:07.698 ************************************ 00:05:07.698 END TEST env_memory 00:05:07.698 ************************************ 00:05:07.698 10:23:55 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:07.698 10:23:55 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:07.698 10:23:55 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.698 10:23:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.698 ************************************ 00:05:07.698 START TEST env_vtophys 00:05:07.698 ************************************ 00:05:07.698 10:23:55 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:07.698 EAL: lib.eal log level changed from notice to debug 00:05:07.698 EAL: Detected lcore 0 as core 0 on socket 0 00:05:07.698 EAL: Detected lcore 1 as core 1 on socket 0 00:05:07.698 EAL: Detected lcore 2 as core 2 on socket 0 00:05:07.698 EAL: Detected lcore 3 as core 3 on socket 0 00:05:07.698 EAL: Detected lcore 4 as core 4 on socket 0 00:05:07.698 EAL: Detected lcore 5 as core 5 on socket 0 00:05:07.698 EAL: Detected lcore 6 as core 8 on socket 0 00:05:07.698 EAL: Detected lcore 7 as core 9 on socket 0 00:05:07.698 EAL: Detected lcore 8 as core 10 on socket 0 00:05:07.698 EAL: Detected lcore 9 as core 11 on socket 0 00:05:07.698 EAL: Detected lcore 10 
as core 12 on socket 0 00:05:07.698 EAL: Detected lcore 11 as core 13 on socket 0 00:05:07.698 EAL: Detected lcore 12 as core 0 on socket 1 00:05:07.698 EAL: Detected lcore 13 as core 1 on socket 1 00:05:07.698 EAL: Detected lcore 14 as core 2 on socket 1 00:05:07.698 EAL: Detected lcore 15 as core 3 on socket 1 00:05:07.698 EAL: Detected lcore 16 as core 4 on socket 1 00:05:07.698 EAL: Detected lcore 17 as core 5 on socket 1 00:05:07.698 EAL: Detected lcore 18 as core 8 on socket 1 00:05:07.698 EAL: Detected lcore 19 as core 9 on socket 1 00:05:07.698 EAL: Detected lcore 20 as core 10 on socket 1 00:05:07.698 EAL: Detected lcore 21 as core 11 on socket 1 00:05:07.698 EAL: Detected lcore 22 as core 12 on socket 1 00:05:07.698 EAL: Detected lcore 23 as core 13 on socket 1 00:05:07.698 EAL: Detected lcore 24 as core 0 on socket 0 00:05:07.698 EAL: Detected lcore 25 as core 1 on socket 0 00:05:07.698 EAL: Detected lcore 26 as core 2 on socket 0 00:05:07.698 EAL: Detected lcore 27 as core 3 on socket 0 00:05:07.698 EAL: Detected lcore 28 as core 4 on socket 0 00:05:07.698 EAL: Detected lcore 29 as core 5 on socket 0 00:05:07.698 EAL: Detected lcore 30 as core 8 on socket 0 00:05:07.698 EAL: Detected lcore 31 as core 9 on socket 0 00:05:07.698 EAL: Detected lcore 32 as core 10 on socket 0 00:05:07.698 EAL: Detected lcore 33 as core 11 on socket 0 00:05:07.698 EAL: Detected lcore 34 as core 12 on socket 0 00:05:07.698 EAL: Detected lcore 35 as core 13 on socket 0 00:05:07.698 EAL: Detected lcore 36 as core 0 on socket 1 00:05:07.698 EAL: Detected lcore 37 as core 1 on socket 1 00:05:07.698 EAL: Detected lcore 38 as core 2 on socket 1 00:05:07.698 EAL: Detected lcore 39 as core 3 on socket 1 00:05:07.698 EAL: Detected lcore 40 as core 4 on socket 1 00:05:07.698 EAL: Detected lcore 41 as core 5 on socket 1 00:05:07.698 EAL: Detected lcore 42 as core 8 on socket 1 00:05:07.698 EAL: Detected lcore 43 as core 9 on socket 1 00:05:07.698 EAL: Detected lcore 44 as core 10 on socket 1 00:05:07.698 EAL: Detected lcore 45 as core 11 on socket 1 00:05:07.698 EAL: Detected lcore 46 as core 12 on socket 1 00:05:07.698 EAL: Detected lcore 47 as core 13 on socket 1 00:05:07.698 EAL: Maximum logical cores by configuration: 128 00:05:07.698 EAL: Detected CPU lcores: 48 00:05:07.698 EAL: Detected NUMA nodes: 2 00:05:07.698 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:07.698 EAL: Detected shared linkage of DPDK 00:05:07.698 EAL: No shared files mode enabled, IPC will be disabled 00:05:07.698 EAL: Bus pci wants IOVA as 'DC' 00:05:07.698 EAL: Buses did not request a specific IOVA mode. 00:05:07.698 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:07.698 EAL: Selected IOVA mode 'VA' 00:05:07.698 EAL: Probing VFIO support... 00:05:07.698 EAL: IOMMU type 1 (Type 1) is supported 00:05:07.698 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:07.698 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:07.698 EAL: VFIO support initialized 00:05:07.698 EAL: Ask a virtual area of 0x2e000 bytes 00:05:07.698 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:07.698 EAL: Setting up physically contiguous memory... 
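The lcore lines above are EAL's view of the host topology: 48 logical cores spread over 2 NUMA sockets. Roughly the same core/socket mapping can be read straight from sysfs, which is handy for sanity-checking what EAL should detect on a given box (illustration only; DPDK does its own enumeration):

    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        lcore=${cpu##*cpu}
        core=$(cat "$cpu/topology/core_id")
        socket=$(cat "$cpu/topology/physical_package_id")
        echo "lcore $lcore is core $core on socket $socket"
    done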
00:05:07.698 EAL: Setting maximum number of open files to 524288 00:05:07.698 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:07.698 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:07.698 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:07.698 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.698 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:07.698 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.698 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.698 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:07.698 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:07.698 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.698 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:07.698 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.698 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.698 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:07.698 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:07.698 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.698 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:07.698 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.698 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.698 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:07.698 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:07.698 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.698 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:07.698 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.698 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.698 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:07.698 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:07.698 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:07.698 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.698 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:07.698 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.699 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.699 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:07.699 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:07.699 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.699 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:07.699 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.699 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.699 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:07.699 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:07.699 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.699 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:07.699 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.699 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.699 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:07.699 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:07.699 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.699 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:07.699 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.699 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.699 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:07.699 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:07.699 EAL: Hugepages will be freed exactly as allocated. 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: TSC frequency is ~2700000 KHz 00:05:07.699 EAL: Main lcore 0 is ready (tid=7f04acfcda00;cpuset=[0]) 00:05:07.699 EAL: Trying to obtain current memory policy. 00:05:07.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.699 EAL: Restoring previous memory policy: 0 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was expanded by 2MB 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:07.699 EAL: Mem event callback 'spdk:(nil)' registered 00:05:07.699 00:05:07.699 00:05:07.699 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.699 http://cunit.sourceforge.net/ 00:05:07.699 00:05:07.699 00:05:07.699 Suite: components_suite 00:05:07.699 Test: vtophys_malloc_test ...passed 00:05:07.699 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:07.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.699 EAL: Restoring previous memory policy: 4 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was expanded by 4MB 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was shrunk by 4MB 00:05:07.699 EAL: Trying to obtain current memory policy. 00:05:07.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.699 EAL: Restoring previous memory policy: 4 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was expanded by 6MB 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was shrunk by 6MB 00:05:07.699 EAL: Trying to obtain current memory policy. 00:05:07.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.699 EAL: Restoring previous memory policy: 4 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was expanded by 10MB 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was shrunk by 10MB 00:05:07.699 EAL: Trying to obtain current memory policy. 
00:05:07.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.699 EAL: Restoring previous memory policy: 4 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was expanded by 18MB 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was shrunk by 18MB 00:05:07.699 EAL: Trying to obtain current memory policy. 00:05:07.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.699 EAL: Restoring previous memory policy: 4 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was expanded by 34MB 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was shrunk by 34MB 00:05:07.699 EAL: Trying to obtain current memory policy. 00:05:07.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.699 EAL: Restoring previous memory policy: 4 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was expanded by 66MB 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was shrunk by 66MB 00:05:07.699 EAL: Trying to obtain current memory policy. 00:05:07.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.699 EAL: Restoring previous memory policy: 4 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was expanded by 130MB 00:05:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.699 EAL: request: mp_malloc_sync 00:05:07.699 EAL: No shared files mode enabled, IPC is disabled 00:05:07.699 EAL: Heap on socket 0 was shrunk by 130MB 00:05:07.699 EAL: Trying to obtain current memory policy. 00:05:07.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.958 EAL: Restoring previous memory policy: 4 00:05:07.958 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.958 EAL: request: mp_malloc_sync 00:05:07.958 EAL: No shared files mode enabled, IPC is disabled 00:05:07.958 EAL: Heap on socket 0 was expanded by 258MB 00:05:07.958 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.958 EAL: request: mp_malloc_sync 00:05:07.958 EAL: No shared files mode enabled, IPC is disabled 00:05:07.958 EAL: Heap on socket 0 was shrunk by 258MB 00:05:07.958 EAL: Trying to obtain current memory policy. 
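The "Heap on socket 0 was expanded by ... / shrunk by ..." pairs above come from DPDK growing and shrinking the malloc heap as the test allocates and frees progressively larger buffers; on this box that heap is backed by 2048kB hugepages (the 0x800kB page size in the memseg lists earlier). While a run like this is in progress, the per-node counters behind those messages can be watched directly:

    # totals and free 2MB hugepages per NUMA node
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/{nr,free}_hugepages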
00:05:07.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.217 EAL: Restoring previous memory policy: 4 00:05:08.217 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.217 EAL: request: mp_malloc_sync 00:05:08.217 EAL: No shared files mode enabled, IPC is disabled 00:05:08.217 EAL: Heap on socket 0 was expanded by 514MB 00:05:08.217 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.217 EAL: request: mp_malloc_sync 00:05:08.217 EAL: No shared files mode enabled, IPC is disabled 00:05:08.217 EAL: Heap on socket 0 was shrunk by 514MB 00:05:08.217 EAL: Trying to obtain current memory policy. 00:05:08.217 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.475 EAL: Restoring previous memory policy: 4 00:05:08.475 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.475 EAL: request: mp_malloc_sync 00:05:08.475 EAL: No shared files mode enabled, IPC is disabled 00:05:08.475 EAL: Heap on socket 0 was expanded by 1026MB 00:05:08.732 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.992 EAL: request: mp_malloc_sync 00:05:08.992 EAL: No shared files mode enabled, IPC is disabled 00:05:08.992 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:08.992 passed 00:05:08.992 00:05:08.992 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.992 suites 1 1 n/a 0 0 00:05:08.992 tests 2 2 2 0 0 00:05:08.992 asserts 497 497 497 0 n/a 00:05:08.992 00:05:08.992 Elapsed time = 1.309 seconds 00:05:08.992 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.992 EAL: request: mp_malloc_sync 00:05:08.992 EAL: No shared files mode enabled, IPC is disabled 00:05:08.992 EAL: Heap on socket 0 was shrunk by 2MB 00:05:08.992 EAL: No shared files mode enabled, IPC is disabled 00:05:08.992 EAL: No shared files mode enabled, IPC is disabled 00:05:08.992 EAL: No shared files mode enabled, IPC is disabled 00:05:08.992 00:05:08.992 real 0m1.421s 00:05:08.992 user 0m0.835s 00:05:08.992 sys 0m0.555s 00:05:08.992 10:23:57 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.992 10:23:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:08.992 ************************************ 00:05:08.992 END TEST env_vtophys 00:05:08.992 ************************************ 00:05:08.992 10:23:57 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:08.992 10:23:57 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:08.992 10:23:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:08.992 10:23:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.992 ************************************ 00:05:08.992 START TEST env_pci 00:05:08.992 ************************************ 00:05:08.992 10:23:57 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:08.992 00:05:08.992 00:05:08.992 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.992 http://cunit.sourceforge.net/ 00:05:08.992 00:05:08.992 00:05:08.992 Suite: pci 00:05:08.992 Test: pci_hook ...[2024-11-15 10:23:57.409928] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 246918 has claimed it 00:05:08.992 EAL: Cannot find device (10000:00:01.0) 00:05:08.992 EAL: Failed to attach device on primary process 00:05:08.992 passed 00:05:08.992 00:05:08.992 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:08.992 suites 1 1 n/a 0 0 00:05:08.992 tests 1 1 1 0 0 00:05:08.992 asserts 25 25 25 0 n/a 00:05:08.992 00:05:08.992 Elapsed time = 0.022 seconds 00:05:08.992 00:05:08.992 real 0m0.035s 00:05:08.992 user 0m0.009s 00:05:08.992 sys 0m0.026s 00:05:08.992 10:23:57 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.992 10:23:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:08.992 ************************************ 00:05:08.992 END TEST env_pci 00:05:08.992 ************************************ 00:05:08.992 10:23:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:09.252 10:23:57 env -- env/env.sh@15 -- # uname 00:05:09.252 10:23:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:09.252 10:23:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:09.252 10:23:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.252 10:23:57 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:09.252 10:23:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.252 10:23:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.252 ************************************ 00:05:09.252 START TEST env_dpdk_post_init 00:05:09.252 ************************************ 00:05:09.252 10:23:57 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.252 EAL: Detected CPU lcores: 48 00:05:09.252 EAL: Detected NUMA nodes: 2 00:05:09.252 EAL: Detected shared linkage of DPDK 00:05:09.252 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:09.252 EAL: Selected IOVA mode 'VA' 00:05:09.252 EAL: VFIO support initialized 00:05:09.252 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.252 EAL: Using IOMMU type 1 (Type 1) 00:05:09.252 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:09.252 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:09.252 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:09.252 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:09.252 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:09.252 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:09.252 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:09.252 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:09.252 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:09.252 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:09.512 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:09.512 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:09.512 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:09.512 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:09.512 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:09.512 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:10.083 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:81:00.0 (socket 1) 
00:05:14.268 EAL: Releasing PCI mapped resource for 0000:81:00.0 00:05:14.268 EAL: Calling pci_unmap_resource for 0000:81:00.0 at 0x202001040000 00:05:14.268 Starting DPDK initialization... 00:05:14.268 Starting SPDK post initialization... 00:05:14.268 SPDK NVMe probe 00:05:14.268 Attaching to 0000:81:00.0 00:05:14.268 Attached to 0000:81:00.0 00:05:14.268 Cleaning up... 00:05:14.268 00:05:14.268 real 0m5.182s 00:05:14.268 user 0m3.711s 00:05:14.268 sys 0m0.528s 00:05:14.268 10:24:02 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.268 10:24:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.268 ************************************ 00:05:14.268 END TEST env_dpdk_post_init 00:05:14.268 ************************************ 00:05:14.269 10:24:02 env -- env/env.sh@26 -- # uname 00:05:14.269 10:24:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:14.269 10:24:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:14.269 10:24:02 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.269 10:24:02 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.269 10:24:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.269 ************************************ 00:05:14.269 START TEST env_mem_callbacks 00:05:14.269 ************************************ 00:05:14.269 10:24:02 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:14.527 EAL: Detected CPU lcores: 48 00:05:14.528 EAL: Detected NUMA nodes: 2 00:05:14.528 EAL: Detected shared linkage of DPDK 00:05:14.528 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:14.528 EAL: Selected IOVA mode 'VA' 00:05:14.528 EAL: VFIO support initialized 00:05:14.528 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:14.528 00:05:14.528 00:05:14.528 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.528 http://cunit.sourceforge.net/ 00:05:14.528 00:05:14.528 00:05:14.528 Suite: memory 00:05:14.528 Test: test ... 
00:05:14.528 register 0x200000200000 2097152 00:05:14.528 malloc 3145728 00:05:14.528 register 0x200000400000 4194304 00:05:14.528 buf 0x200000500000 len 3145728 PASSED 00:05:14.528 malloc 64 00:05:14.528 buf 0x2000004fff40 len 64 PASSED 00:05:14.528 malloc 4194304 00:05:14.528 register 0x200000800000 6291456 00:05:14.528 buf 0x200000a00000 len 4194304 PASSED 00:05:14.528 free 0x200000500000 3145728 00:05:14.528 free 0x2000004fff40 64 00:05:14.528 unregister 0x200000400000 4194304 PASSED 00:05:14.528 free 0x200000a00000 4194304 00:05:14.528 unregister 0x200000800000 6291456 PASSED 00:05:14.528 malloc 8388608 00:05:14.528 register 0x200000400000 10485760 00:05:14.528 buf 0x200000600000 len 8388608 PASSED 00:05:14.528 free 0x200000600000 8388608 00:05:14.528 unregister 0x200000400000 10485760 PASSED 00:05:14.528 passed 00:05:14.528 00:05:14.528 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.528 suites 1 1 n/a 0 0 00:05:14.528 tests 1 1 1 0 0 00:05:14.528 asserts 15 15 15 0 n/a 00:05:14.528 00:05:14.528 Elapsed time = 0.005 seconds 00:05:14.528 00:05:14.528 real 0m0.048s 00:05:14.528 user 0m0.010s 00:05:14.528 sys 0m0.038s 00:05:14.528 10:24:02 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.528 10:24:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:14.528 ************************************ 00:05:14.528 END TEST env_mem_callbacks 00:05:14.528 ************************************ 00:05:14.528 00:05:14.528 real 0m7.236s 00:05:14.528 user 0m4.898s 00:05:14.528 sys 0m1.387s 00:05:14.528 10:24:02 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.528 10:24:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.528 ************************************ 00:05:14.528 END TEST env 00:05:14.528 ************************************ 00:05:14.528 10:24:02 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:14.528 10:24:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.528 10:24:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.528 10:24:02 -- common/autotest_common.sh@10 -- # set +x 00:05:14.528 ************************************ 00:05:14.528 START TEST rpc 00:05:14.528 ************************************ 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:14.528 * Looking for test storage... 
00:05:14.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.528 10:24:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.528 10:24:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.528 10:24:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.528 10:24:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.528 10:24:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.528 10:24:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.528 10:24:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.528 10:24:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.528 10:24:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.528 10:24:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.528 10:24:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.528 10:24:02 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:14.528 10:24:02 rpc -- scripts/common.sh@345 -- # : 1 00:05:14.528 10:24:02 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.528 10:24:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.528 10:24:02 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:14.528 10:24:02 rpc -- scripts/common.sh@353 -- # local d=1 00:05:14.528 10:24:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.528 10:24:02 rpc -- scripts/common.sh@355 -- # echo 1 00:05:14.528 10:24:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.528 10:24:02 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:14.528 10:24:02 rpc -- scripts/common.sh@353 -- # local d=2 00:05:14.528 10:24:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.528 10:24:02 rpc -- scripts/common.sh@355 -- # echo 2 00:05:14.528 10:24:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.528 10:24:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.528 10:24:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.528 10:24:02 rpc -- scripts/common.sh@368 -- # return 0 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.528 --rc genhtml_branch_coverage=1 00:05:14.528 --rc genhtml_function_coverage=1 00:05:14.528 --rc genhtml_legend=1 00:05:14.528 --rc geninfo_all_blocks=1 00:05:14.528 --rc geninfo_unexecuted_blocks=1 00:05:14.528 00:05:14.528 ' 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.528 --rc genhtml_branch_coverage=1 00:05:14.528 --rc genhtml_function_coverage=1 00:05:14.528 --rc genhtml_legend=1 00:05:14.528 --rc geninfo_all_blocks=1 00:05:14.528 --rc geninfo_unexecuted_blocks=1 00:05:14.528 00:05:14.528 ' 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.528 --rc genhtml_branch_coverage=1 00:05:14.528 --rc genhtml_function_coverage=1 
00:05:14.528 --rc genhtml_legend=1 00:05:14.528 --rc geninfo_all_blocks=1 00:05:14.528 --rc geninfo_unexecuted_blocks=1 00:05:14.528 00:05:14.528 ' 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.528 --rc genhtml_branch_coverage=1 00:05:14.528 --rc genhtml_function_coverage=1 00:05:14.528 --rc genhtml_legend=1 00:05:14.528 --rc geninfo_all_blocks=1 00:05:14.528 --rc geninfo_unexecuted_blocks=1 00:05:14.528 00:05:14.528 ' 00:05:14.528 10:24:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=247831 00:05:14.528 10:24:02 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:14.528 10:24:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.528 10:24:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 247831 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@833 -- # '[' -z 247831 ']' 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.528 10:24:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.787 [2024-11-15 10:24:03.030460] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:05:14.787 [2024-11-15 10:24:03.030545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247831 ] 00:05:14.787 [2024-11-15 10:24:03.100005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.787 [2024-11-15 10:24:03.156024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:14.787 [2024-11-15 10:24:03.156078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 247831' to capture a snapshot of events at runtime. 00:05:14.787 [2024-11-15 10:24:03.156105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:14.787 [2024-11-15 10:24:03.156116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:14.787 [2024-11-15 10:24:03.156126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid247831 for offline analysis/debug. 
00:05:14.787 [2024-11-15 10:24:03.156712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.046 10:24:03 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:15.046 10:24:03 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:15.046 10:24:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.046 10:24:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.046 10:24:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:15.046 10:24:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:15.046 10:24:03 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.046 10:24:03 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.046 10:24:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.046 ************************************ 00:05:15.046 START TEST rpc_integrity 00:05:15.046 ************************************ 00:05:15.046 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:15.046 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:15.046 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.046 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.046 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.046 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:15.046 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:15.046 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:15.046 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:15.046 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.046 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.046 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.046 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:15.046 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:15.046 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.046 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.046 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.046 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:15.046 { 00:05:15.046 "name": "Malloc0", 00:05:15.046 "aliases": [ 00:05:15.046 "9429e4b6-c809-47b5-82a7-6c6cee7b7148" 00:05:15.046 ], 00:05:15.046 "product_name": "Malloc disk", 00:05:15.046 "block_size": 512, 00:05:15.046 "num_blocks": 16384, 00:05:15.046 "uuid": "9429e4b6-c809-47b5-82a7-6c6cee7b7148", 00:05:15.046 "assigned_rate_limits": { 00:05:15.046 "rw_ios_per_sec": 0, 00:05:15.046 "rw_mbytes_per_sec": 0, 00:05:15.046 "r_mbytes_per_sec": 0, 00:05:15.046 "w_mbytes_per_sec": 0 00:05:15.046 }, 
00:05:15.046 "claimed": false, 00:05:15.046 "zoned": false, 00:05:15.046 "supported_io_types": { 00:05:15.046 "read": true, 00:05:15.046 "write": true, 00:05:15.047 "unmap": true, 00:05:15.047 "flush": true, 00:05:15.047 "reset": true, 00:05:15.047 "nvme_admin": false, 00:05:15.047 "nvme_io": false, 00:05:15.047 "nvme_io_md": false, 00:05:15.047 "write_zeroes": true, 00:05:15.047 "zcopy": true, 00:05:15.047 "get_zone_info": false, 00:05:15.047 "zone_management": false, 00:05:15.047 "zone_append": false, 00:05:15.047 "compare": false, 00:05:15.047 "compare_and_write": false, 00:05:15.047 "abort": true, 00:05:15.047 "seek_hole": false, 00:05:15.047 "seek_data": false, 00:05:15.047 "copy": true, 00:05:15.047 "nvme_iov_md": false 00:05:15.047 }, 00:05:15.047 "memory_domains": [ 00:05:15.047 { 00:05:15.047 "dma_device_id": "system", 00:05:15.047 "dma_device_type": 1 00:05:15.047 }, 00:05:15.047 { 00:05:15.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.047 "dma_device_type": 2 00:05:15.047 } 00:05:15.047 ], 00:05:15.047 "driver_specific": {} 00:05:15.047 } 00:05:15.047 ]' 00:05:15.047 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:15.305 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:15.305 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:15.305 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.305 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.305 [2024-11-15 10:24:03.550034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:15.305 [2024-11-15 10:24:03.550088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.305 [2024-11-15 10:24:03.550110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x873740 00:05:15.305 [2024-11-15 10:24:03.550123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.305 [2024-11-15 10:24:03.551448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.305 [2024-11-15 10:24:03.551473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:15.305 Passthru0 00:05:15.305 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.305 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:15.305 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.305 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.305 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.305 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.305 { 00:05:15.305 "name": "Malloc0", 00:05:15.305 "aliases": [ 00:05:15.305 "9429e4b6-c809-47b5-82a7-6c6cee7b7148" 00:05:15.305 ], 00:05:15.305 "product_name": "Malloc disk", 00:05:15.305 "block_size": 512, 00:05:15.305 "num_blocks": 16384, 00:05:15.305 "uuid": "9429e4b6-c809-47b5-82a7-6c6cee7b7148", 00:05:15.305 "assigned_rate_limits": { 00:05:15.305 "rw_ios_per_sec": 0, 00:05:15.305 "rw_mbytes_per_sec": 0, 00:05:15.305 "r_mbytes_per_sec": 0, 00:05:15.305 "w_mbytes_per_sec": 0 00:05:15.305 }, 00:05:15.305 "claimed": true, 00:05:15.305 "claim_type": "exclusive_write", 00:05:15.305 "zoned": false, 00:05:15.305 "supported_io_types": { 00:05:15.305 "read": true, 00:05:15.305 "write": true, 00:05:15.305 "unmap": true, 00:05:15.305 "flush": 
true, 00:05:15.305 "reset": true, 00:05:15.305 "nvme_admin": false, 00:05:15.305 "nvme_io": false, 00:05:15.305 "nvme_io_md": false, 00:05:15.305 "write_zeroes": true, 00:05:15.305 "zcopy": true, 00:05:15.305 "get_zone_info": false, 00:05:15.305 "zone_management": false, 00:05:15.305 "zone_append": false, 00:05:15.305 "compare": false, 00:05:15.305 "compare_and_write": false, 00:05:15.305 "abort": true, 00:05:15.305 "seek_hole": false, 00:05:15.305 "seek_data": false, 00:05:15.305 "copy": true, 00:05:15.305 "nvme_iov_md": false 00:05:15.305 }, 00:05:15.305 "memory_domains": [ 00:05:15.305 { 00:05:15.305 "dma_device_id": "system", 00:05:15.305 "dma_device_type": 1 00:05:15.305 }, 00:05:15.305 { 00:05:15.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.305 "dma_device_type": 2 00:05:15.305 } 00:05:15.305 ], 00:05:15.305 "driver_specific": {} 00:05:15.305 }, 00:05:15.305 { 00:05:15.305 "name": "Passthru0", 00:05:15.305 "aliases": [ 00:05:15.305 "1fb65bad-3549-5dc5-abdd-75f553aa2f30" 00:05:15.305 ], 00:05:15.305 "product_name": "passthru", 00:05:15.305 "block_size": 512, 00:05:15.305 "num_blocks": 16384, 00:05:15.305 "uuid": "1fb65bad-3549-5dc5-abdd-75f553aa2f30", 00:05:15.305 "assigned_rate_limits": { 00:05:15.305 "rw_ios_per_sec": 0, 00:05:15.305 "rw_mbytes_per_sec": 0, 00:05:15.305 "r_mbytes_per_sec": 0, 00:05:15.305 "w_mbytes_per_sec": 0 00:05:15.305 }, 00:05:15.305 "claimed": false, 00:05:15.305 "zoned": false, 00:05:15.305 "supported_io_types": { 00:05:15.305 "read": true, 00:05:15.305 "write": true, 00:05:15.305 "unmap": true, 00:05:15.305 "flush": true, 00:05:15.305 "reset": true, 00:05:15.305 "nvme_admin": false, 00:05:15.305 "nvme_io": false, 00:05:15.305 "nvme_io_md": false, 00:05:15.305 "write_zeroes": true, 00:05:15.305 "zcopy": true, 00:05:15.305 "get_zone_info": false, 00:05:15.305 "zone_management": false, 00:05:15.305 "zone_append": false, 00:05:15.305 "compare": false, 00:05:15.305 "compare_and_write": false, 00:05:15.305 "abort": true, 00:05:15.305 "seek_hole": false, 00:05:15.305 "seek_data": false, 00:05:15.305 "copy": true, 00:05:15.305 "nvme_iov_md": false 00:05:15.305 }, 00:05:15.305 "memory_domains": [ 00:05:15.305 { 00:05:15.305 "dma_device_id": "system", 00:05:15.305 "dma_device_type": 1 00:05:15.305 }, 00:05:15.305 { 00:05:15.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.306 "dma_device_type": 2 00:05:15.306 } 00:05:15.306 ], 00:05:15.306 "driver_specific": { 00:05:15.306 "passthru": { 00:05:15.306 "name": "Passthru0", 00:05:15.306 "base_bdev_name": "Malloc0" 00:05:15.306 } 00:05:15.306 } 00:05:15.306 } 00:05:15.306 ]' 00:05:15.306 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:15.306 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:15.306 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:15.306 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.306 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.306 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.306 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:15.306 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.306 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.306 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.306 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:15.306 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.306 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.306 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.306 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.306 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:15.306 10:24:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:15.306 00:05:15.306 real 0m0.216s 00:05:15.306 user 0m0.142s 00:05:15.306 sys 0m0.016s 00:05:15.306 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.306 10:24:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.306 ************************************ 00:05:15.306 END TEST rpc_integrity 00:05:15.306 ************************************ 00:05:15.306 10:24:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:15.306 10:24:03 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.306 10:24:03 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.306 10:24:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.306 ************************************ 00:05:15.306 START TEST rpc_plugins 00:05:15.306 ************************************ 00:05:15.306 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:15.306 10:24:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:15.306 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.306 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.306 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.306 10:24:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:15.306 10:24:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:15.306 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.306 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.306 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.306 10:24:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:15.306 { 00:05:15.306 "name": "Malloc1", 00:05:15.306 "aliases": [ 00:05:15.306 "de1c4c6e-3cb9-41eb-9cf3-356544e1f2e8" 00:05:15.306 ], 00:05:15.306 "product_name": "Malloc disk", 00:05:15.306 "block_size": 4096, 00:05:15.306 "num_blocks": 256, 00:05:15.306 "uuid": "de1c4c6e-3cb9-41eb-9cf3-356544e1f2e8", 00:05:15.306 "assigned_rate_limits": { 00:05:15.306 "rw_ios_per_sec": 0, 00:05:15.306 "rw_mbytes_per_sec": 0, 00:05:15.306 "r_mbytes_per_sec": 0, 00:05:15.306 "w_mbytes_per_sec": 0 00:05:15.306 }, 00:05:15.306 "claimed": false, 00:05:15.306 "zoned": false, 00:05:15.306 "supported_io_types": { 00:05:15.306 "read": true, 00:05:15.306 "write": true, 00:05:15.306 "unmap": true, 00:05:15.306 "flush": true, 00:05:15.306 "reset": true, 00:05:15.306 "nvme_admin": false, 00:05:15.306 "nvme_io": false, 00:05:15.306 "nvme_io_md": false, 00:05:15.306 "write_zeroes": true, 00:05:15.306 "zcopy": true, 00:05:15.306 "get_zone_info": false, 00:05:15.306 "zone_management": false, 00:05:15.306 "zone_append": false, 00:05:15.306 "compare": false, 00:05:15.306 "compare_and_write": false, 00:05:15.306 "abort": true, 00:05:15.306 "seek_hole": false, 00:05:15.306 "seek_data": false, 00:05:15.306 "copy": true, 00:05:15.306 "nvme_iov_md": false 
00:05:15.306 }, 00:05:15.306 "memory_domains": [ 00:05:15.306 { 00:05:15.306 "dma_device_id": "system", 00:05:15.306 "dma_device_type": 1 00:05:15.306 }, 00:05:15.306 { 00:05:15.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.306 "dma_device_type": 2 00:05:15.306 } 00:05:15.306 ], 00:05:15.306 "driver_specific": {} 00:05:15.306 } 00:05:15.306 ]' 00:05:15.306 10:24:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:15.564 10:24:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:15.564 10:24:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:15.564 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.564 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.564 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.564 10:24:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:15.564 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.564 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.564 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.564 10:24:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:15.564 10:24:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:15.564 10:24:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:15.564 00:05:15.564 real 0m0.116s 00:05:15.564 user 0m0.067s 00:05:15.564 sys 0m0.011s 00:05:15.564 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.564 10:24:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.564 ************************************ 00:05:15.564 END TEST rpc_plugins 00:05:15.564 ************************************ 00:05:15.564 10:24:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:15.564 10:24:03 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.564 10:24:03 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.564 10:24:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.564 ************************************ 00:05:15.564 START TEST rpc_trace_cmd_test 00:05:15.564 ************************************ 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:15.564 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid247831", 00:05:15.564 "tpoint_group_mask": "0x8", 00:05:15.564 "iscsi_conn": { 00:05:15.564 "mask": "0x2", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "scsi": { 00:05:15.564 "mask": "0x4", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "bdev": { 00:05:15.564 "mask": "0x8", 00:05:15.564 "tpoint_mask": "0xffffffffffffffff" 00:05:15.564 }, 00:05:15.564 "nvmf_rdma": { 00:05:15.564 "mask": "0x10", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "nvmf_tcp": { 00:05:15.564 "mask": "0x20", 00:05:15.564 
"tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "ftl": { 00:05:15.564 "mask": "0x40", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "blobfs": { 00:05:15.564 "mask": "0x80", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "dsa": { 00:05:15.564 "mask": "0x200", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "thread": { 00:05:15.564 "mask": "0x400", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "nvme_pcie": { 00:05:15.564 "mask": "0x800", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "iaa": { 00:05:15.564 "mask": "0x1000", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "nvme_tcp": { 00:05:15.564 "mask": "0x2000", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "bdev_nvme": { 00:05:15.564 "mask": "0x4000", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "sock": { 00:05:15.564 "mask": "0x8000", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "blob": { 00:05:15.564 "mask": "0x10000", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "bdev_raid": { 00:05:15.564 "mask": "0x20000", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 }, 00:05:15.564 "scheduler": { 00:05:15.564 "mask": "0x40000", 00:05:15.564 "tpoint_mask": "0x0" 00:05:15.564 } 00:05:15.564 }' 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:15.564 10:24:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:15.564 10:24:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:15.564 10:24:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:15.823 10:24:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:15.823 00:05:15.823 real 0m0.183s 00:05:15.823 user 0m0.164s 00:05:15.823 sys 0m0.013s 00:05:15.823 10:24:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.823 10:24:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.823 ************************************ 00:05:15.823 END TEST rpc_trace_cmd_test 00:05:15.823 ************************************ 00:05:15.823 10:24:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:15.823 10:24:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:15.823 10:24:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:15.823 10:24:04 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.823 10:24:04 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.823 10:24:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.823 ************************************ 00:05:15.823 START TEST rpc_daemon_integrity 00:05:15.823 ************************************ 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.823 10:24:04 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:15.823 { 00:05:15.823 "name": "Malloc2", 00:05:15.823 "aliases": [ 00:05:15.823 "ce955960-249c-4964-88d9-791c72e43b5b" 00:05:15.823 ], 00:05:15.823 "product_name": "Malloc disk", 00:05:15.823 "block_size": 512, 00:05:15.823 "num_blocks": 16384, 00:05:15.823 "uuid": "ce955960-249c-4964-88d9-791c72e43b5b", 00:05:15.823 "assigned_rate_limits": { 00:05:15.823 "rw_ios_per_sec": 0, 00:05:15.823 "rw_mbytes_per_sec": 0, 00:05:15.823 "r_mbytes_per_sec": 0, 00:05:15.823 "w_mbytes_per_sec": 0 00:05:15.823 }, 00:05:15.823 "claimed": false, 00:05:15.823 "zoned": false, 00:05:15.823 "supported_io_types": { 00:05:15.823 "read": true, 00:05:15.823 "write": true, 00:05:15.823 "unmap": true, 00:05:15.823 "flush": true, 00:05:15.823 "reset": true, 00:05:15.823 "nvme_admin": false, 00:05:15.823 "nvme_io": false, 00:05:15.823 "nvme_io_md": false, 00:05:15.823 "write_zeroes": true, 00:05:15.823 "zcopy": true, 00:05:15.823 "get_zone_info": false, 00:05:15.823 "zone_management": false, 00:05:15.823 "zone_append": false, 00:05:15.823 "compare": false, 00:05:15.823 "compare_and_write": false, 00:05:15.823 "abort": true, 00:05:15.823 "seek_hole": false, 00:05:15.823 "seek_data": false, 00:05:15.823 "copy": true, 00:05:15.823 "nvme_iov_md": false 00:05:15.823 }, 00:05:15.823 "memory_domains": [ 00:05:15.823 { 00:05:15.823 "dma_device_id": "system", 00:05:15.823 "dma_device_type": 1 00:05:15.823 }, 00:05:15.823 { 00:05:15.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.823 "dma_device_type": 2 00:05:15.823 } 00:05:15.823 ], 00:05:15.823 "driver_specific": {} 00:05:15.823 } 00:05:15.823 ]' 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.823 [2024-11-15 10:24:04.192415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:15.823 
[2024-11-15 10:24:04.192457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.823 [2024-11-15 10:24:04.192483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x873d20 00:05:15.823 [2024-11-15 10:24:04.192497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.823 [2024-11-15 10:24:04.193695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.823 [2024-11-15 10:24:04.193730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:15.823 Passthru0 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.823 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.823 { 00:05:15.823 "name": "Malloc2", 00:05:15.823 "aliases": [ 00:05:15.823 "ce955960-249c-4964-88d9-791c72e43b5b" 00:05:15.823 ], 00:05:15.823 "product_name": "Malloc disk", 00:05:15.823 "block_size": 512, 00:05:15.823 "num_blocks": 16384, 00:05:15.823 "uuid": "ce955960-249c-4964-88d9-791c72e43b5b", 00:05:15.823 "assigned_rate_limits": { 00:05:15.823 "rw_ios_per_sec": 0, 00:05:15.823 "rw_mbytes_per_sec": 0, 00:05:15.823 "r_mbytes_per_sec": 0, 00:05:15.824 "w_mbytes_per_sec": 0 00:05:15.824 }, 00:05:15.824 "claimed": true, 00:05:15.824 "claim_type": "exclusive_write", 00:05:15.824 "zoned": false, 00:05:15.824 "supported_io_types": { 00:05:15.824 "read": true, 00:05:15.824 "write": true, 00:05:15.824 "unmap": true, 00:05:15.824 "flush": true, 00:05:15.824 "reset": true, 00:05:15.824 "nvme_admin": false, 00:05:15.824 "nvme_io": false, 00:05:15.824 "nvme_io_md": false, 00:05:15.824 "write_zeroes": true, 00:05:15.824 "zcopy": true, 00:05:15.824 "get_zone_info": false, 00:05:15.824 "zone_management": false, 00:05:15.824 "zone_append": false, 00:05:15.824 "compare": false, 00:05:15.824 "compare_and_write": false, 00:05:15.824 "abort": true, 00:05:15.824 "seek_hole": false, 00:05:15.824 "seek_data": false, 00:05:15.824 "copy": true, 00:05:15.824 "nvme_iov_md": false 00:05:15.824 }, 00:05:15.824 "memory_domains": [ 00:05:15.824 { 00:05:15.824 "dma_device_id": "system", 00:05:15.824 "dma_device_type": 1 00:05:15.824 }, 00:05:15.824 { 00:05:15.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.824 "dma_device_type": 2 00:05:15.824 } 00:05:15.824 ], 00:05:15.824 "driver_specific": {} 00:05:15.824 }, 00:05:15.824 { 00:05:15.824 "name": "Passthru0", 00:05:15.824 "aliases": [ 00:05:15.824 "0f087ba2-82f6-5245-aac9-f645c38a61ae" 00:05:15.824 ], 00:05:15.824 "product_name": "passthru", 00:05:15.824 "block_size": 512, 00:05:15.824 "num_blocks": 16384, 00:05:15.824 "uuid": "0f087ba2-82f6-5245-aac9-f645c38a61ae", 00:05:15.824 "assigned_rate_limits": { 00:05:15.824 "rw_ios_per_sec": 0, 00:05:15.824 "rw_mbytes_per_sec": 0, 00:05:15.824 "r_mbytes_per_sec": 0, 00:05:15.824 "w_mbytes_per_sec": 0 00:05:15.824 }, 00:05:15.824 "claimed": false, 00:05:15.824 "zoned": false, 00:05:15.824 "supported_io_types": { 00:05:15.824 "read": true, 00:05:15.824 "write": true, 00:05:15.824 "unmap": true, 00:05:15.824 "flush": true, 00:05:15.824 "reset": true, 
00:05:15.824 "nvme_admin": false, 00:05:15.824 "nvme_io": false, 00:05:15.824 "nvme_io_md": false, 00:05:15.824 "write_zeroes": true, 00:05:15.824 "zcopy": true, 00:05:15.824 "get_zone_info": false, 00:05:15.824 "zone_management": false, 00:05:15.824 "zone_append": false, 00:05:15.824 "compare": false, 00:05:15.824 "compare_and_write": false, 00:05:15.824 "abort": true, 00:05:15.824 "seek_hole": false, 00:05:15.824 "seek_data": false, 00:05:15.824 "copy": true, 00:05:15.824 "nvme_iov_md": false 00:05:15.824 }, 00:05:15.824 "memory_domains": [ 00:05:15.824 { 00:05:15.824 "dma_device_id": "system", 00:05:15.824 "dma_device_type": 1 00:05:15.824 }, 00:05:15.824 { 00:05:15.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.824 "dma_device_type": 2 00:05:15.824 } 00:05:15.824 ], 00:05:15.824 "driver_specific": { 00:05:15.824 "passthru": { 00:05:15.824 "name": "Passthru0", 00:05:15.824 "base_bdev_name": "Malloc2" 00:05:15.824 } 00:05:15.824 } 00:05:15.824 } 00:05:15.824 ]' 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.824 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:16.082 10:24:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:16.082 00:05:16.082 real 0m0.209s 00:05:16.082 user 0m0.129s 00:05:16.082 sys 0m0.025s 00:05:16.082 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.082 10:24:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.082 ************************************ 00:05:16.082 END TEST rpc_daemon_integrity 00:05:16.082 ************************************ 00:05:16.082 10:24:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:16.082 10:24:04 rpc -- rpc/rpc.sh@84 -- # killprocess 247831 00:05:16.082 10:24:04 rpc -- common/autotest_common.sh@952 -- # '[' -z 247831 ']' 00:05:16.083 10:24:04 rpc -- common/autotest_common.sh@956 -- # kill -0 247831 00:05:16.083 10:24:04 rpc -- common/autotest_common.sh@957 -- # uname 00:05:16.083 10:24:04 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.083 10:24:04 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 247831 
00:05:16.083 10:24:04 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:16.083 10:24:04 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:16.083 10:24:04 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 247831' 00:05:16.083 killing process with pid 247831 00:05:16.083 10:24:04 rpc -- common/autotest_common.sh@971 -- # kill 247831 00:05:16.083 10:24:04 rpc -- common/autotest_common.sh@976 -- # wait 247831 00:05:16.342 00:05:16.342 real 0m1.939s 00:05:16.342 user 0m2.414s 00:05:16.342 sys 0m0.579s 00:05:16.342 10:24:04 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.342 10:24:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.342 ************************************ 00:05:16.342 END TEST rpc 00:05:16.342 ************************************ 00:05:16.342 10:24:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:16.342 10:24:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.342 10:24:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.342 10:24:04 -- common/autotest_common.sh@10 -- # set +x 00:05:16.601 ************************************ 00:05:16.601 START TEST skip_rpc 00:05:16.601 ************************************ 00:05:16.601 10:24:04 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:16.601 * Looking for test storage... 00:05:16.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.601 10:24:04 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:16.601 10:24:04 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.601 10:24:04 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.601 10:24:04 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.601 10:24:04 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:16.601 10:24:04 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.601 10:24:04 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.601 --rc genhtml_branch_coverage=1 00:05:16.601 --rc genhtml_function_coverage=1 00:05:16.601 --rc genhtml_legend=1 00:05:16.602 --rc geninfo_all_blocks=1 00:05:16.602 --rc geninfo_unexecuted_blocks=1 00:05:16.602 00:05:16.602 ' 00:05:16.602 10:24:04 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.602 --rc genhtml_branch_coverage=1 00:05:16.602 --rc genhtml_function_coverage=1 00:05:16.602 --rc genhtml_legend=1 00:05:16.602 --rc geninfo_all_blocks=1 00:05:16.602 --rc geninfo_unexecuted_blocks=1 00:05:16.602 00:05:16.602 ' 00:05:16.602 10:24:04 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:16.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.602 --rc genhtml_branch_coverage=1 00:05:16.602 --rc genhtml_function_coverage=1 00:05:16.602 --rc genhtml_legend=1 00:05:16.602 --rc geninfo_all_blocks=1 00:05:16.602 --rc geninfo_unexecuted_blocks=1 00:05:16.602 00:05:16.602 ' 00:05:16.602 10:24:04 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.602 --rc genhtml_branch_coverage=1 00:05:16.602 --rc genhtml_function_coverage=1 00:05:16.602 --rc genhtml_legend=1 00:05:16.602 --rc geninfo_all_blocks=1 00:05:16.602 --rc geninfo_unexecuted_blocks=1 00:05:16.602 00:05:16.602 ' 00:05:16.602 10:24:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:16.602 10:24:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:16.602 10:24:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:16.602 10:24:04 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.602 10:24:04 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.602 10:24:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.602 ************************************ 00:05:16.602 START TEST skip_rpc 00:05:16.602 ************************************ 00:05:16.602 10:24:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:16.602 
10:24:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=248270 00:05:16.602 10:24:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:16.602 10:24:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.602 10:24:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:16.602 [2024-11-15 10:24:05.055800] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:05:16.602 [2024-11-15 10:24:05.055866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248270 ] 00:05:16.860 [2024-11-15 10:24:05.119532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.860 [2024-11-15 10:24:05.176708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 248270 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 248270 ']' 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 248270 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 248270 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 248270' 00:05:22.126 killing process with pid 248270 00:05:22.126 10:24:10 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 248270 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 248270 00:05:22.126 00:05:22.126 real 0m5.451s 00:05:22.126 user 0m5.156s 00:05:22.126 sys 0m0.305s 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.126 10:24:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.126 ************************************ 00:05:22.126 END TEST skip_rpc 00:05:22.126 ************************************ 00:05:22.126 10:24:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:22.126 10:24:10 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:22.126 10:24:10 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.126 10:24:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.126 ************************************ 00:05:22.126 START TEST skip_rpc_with_json 00:05:22.126 ************************************ 00:05:22.126 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:22.126 10:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:22.126 10:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=249461 00:05:22.126 10:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.126 10:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.126 10:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 249461 00:05:22.126 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 249461 ']' 00:05:22.126 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.126 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:22.126 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.126 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:22.126 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.126 [2024-11-15 10:24:10.553921] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:05:22.126 [2024-11-15 10:24:10.554014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid249461 ] 00:05:22.384 [2024-11-15 10:24:10.621725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.384 [2024-11-15 10:24:10.683355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.643 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:22.643 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:22.643 10:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:22.643 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.643 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.643 [2024-11-15 10:24:10.957286] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:22.643 request: 00:05:22.643 { 00:05:22.643 "trtype": "tcp", 00:05:22.643 "method": "nvmf_get_transports", 00:05:22.643 "req_id": 1 00:05:22.643 } 00:05:22.643 Got JSON-RPC error response 00:05:22.643 response: 00:05:22.643 { 00:05:22.643 "code": -19, 00:05:22.643 "message": "No such device" 00:05:22.643 } 00:05:22.643 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:22.643 10:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:22.643 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.643 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.643 [2024-11-15 10:24:10.965424] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.643 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.643 10:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:22.643 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.643 10:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.902 10:24:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.902 10:24:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:22.902 { 00:05:22.902 "subsystems": [ 00:05:22.902 { 00:05:22.902 "subsystem": "fsdev", 00:05:22.902 "config": [ 00:05:22.902 { 00:05:22.902 "method": "fsdev_set_opts", 00:05:22.902 "params": { 00:05:22.902 "fsdev_io_pool_size": 65535, 00:05:22.902 "fsdev_io_cache_size": 256 00:05:22.902 } 00:05:22.902 } 00:05:22.902 ] 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "vfio_user_target", 00:05:22.902 "config": null 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "keyring", 00:05:22.902 "config": [] 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "iobuf", 00:05:22.902 "config": [ 00:05:22.902 { 00:05:22.902 "method": "iobuf_set_options", 00:05:22.902 "params": { 00:05:22.902 "small_pool_count": 8192, 00:05:22.902 "large_pool_count": 1024, 00:05:22.902 "small_bufsize": 8192, 00:05:22.902 "large_bufsize": 135168, 00:05:22.902 "enable_numa": false 00:05:22.902 } 00:05:22.902 } 00:05:22.902 
] 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "sock", 00:05:22.902 "config": [ 00:05:22.902 { 00:05:22.902 "method": "sock_set_default_impl", 00:05:22.902 "params": { 00:05:22.902 "impl_name": "posix" 00:05:22.902 } 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "method": "sock_impl_set_options", 00:05:22.902 "params": { 00:05:22.902 "impl_name": "ssl", 00:05:22.902 "recv_buf_size": 4096, 00:05:22.902 "send_buf_size": 4096, 00:05:22.902 "enable_recv_pipe": true, 00:05:22.902 "enable_quickack": false, 00:05:22.902 "enable_placement_id": 0, 00:05:22.902 "enable_zerocopy_send_server": true, 00:05:22.902 "enable_zerocopy_send_client": false, 00:05:22.902 "zerocopy_threshold": 0, 00:05:22.902 "tls_version": 0, 00:05:22.902 "enable_ktls": false 00:05:22.902 } 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "method": "sock_impl_set_options", 00:05:22.902 "params": { 00:05:22.902 "impl_name": "posix", 00:05:22.902 "recv_buf_size": 2097152, 00:05:22.902 "send_buf_size": 2097152, 00:05:22.902 "enable_recv_pipe": true, 00:05:22.902 "enable_quickack": false, 00:05:22.902 "enable_placement_id": 0, 00:05:22.902 "enable_zerocopy_send_server": true, 00:05:22.902 "enable_zerocopy_send_client": false, 00:05:22.902 "zerocopy_threshold": 0, 00:05:22.902 "tls_version": 0, 00:05:22.902 "enable_ktls": false 00:05:22.902 } 00:05:22.902 } 00:05:22.902 ] 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "vmd", 00:05:22.902 "config": [] 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "accel", 00:05:22.902 "config": [ 00:05:22.902 { 00:05:22.902 "method": "accel_set_options", 00:05:22.902 "params": { 00:05:22.902 "small_cache_size": 128, 00:05:22.902 "large_cache_size": 16, 00:05:22.902 "task_count": 2048, 00:05:22.902 "sequence_count": 2048, 00:05:22.902 "buf_count": 2048 00:05:22.902 } 00:05:22.902 } 00:05:22.902 ] 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "bdev", 00:05:22.902 "config": [ 00:05:22.902 { 00:05:22.902 "method": "bdev_set_options", 00:05:22.902 "params": { 00:05:22.902 "bdev_io_pool_size": 65535, 00:05:22.902 "bdev_io_cache_size": 256, 00:05:22.902 "bdev_auto_examine": true, 00:05:22.902 "iobuf_small_cache_size": 128, 00:05:22.902 "iobuf_large_cache_size": 16 00:05:22.902 } 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "method": "bdev_raid_set_options", 00:05:22.902 "params": { 00:05:22.902 "process_window_size_kb": 1024, 00:05:22.902 "process_max_bandwidth_mb_sec": 0 00:05:22.902 } 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "method": "bdev_iscsi_set_options", 00:05:22.902 "params": { 00:05:22.902 "timeout_sec": 30 00:05:22.902 } 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "method": "bdev_nvme_set_options", 00:05:22.902 "params": { 00:05:22.902 "action_on_timeout": "none", 00:05:22.902 "timeout_us": 0, 00:05:22.902 "timeout_admin_us": 0, 00:05:22.902 "keep_alive_timeout_ms": 10000, 00:05:22.902 "arbitration_burst": 0, 00:05:22.902 "low_priority_weight": 0, 00:05:22.902 "medium_priority_weight": 0, 00:05:22.902 "high_priority_weight": 0, 00:05:22.902 "nvme_adminq_poll_period_us": 10000, 00:05:22.902 "nvme_ioq_poll_period_us": 0, 00:05:22.902 "io_queue_requests": 0, 00:05:22.902 "delay_cmd_submit": true, 00:05:22.902 "transport_retry_count": 4, 00:05:22.902 "bdev_retry_count": 3, 00:05:22.902 "transport_ack_timeout": 0, 00:05:22.902 "ctrlr_loss_timeout_sec": 0, 00:05:22.902 "reconnect_delay_sec": 0, 00:05:22.902 "fast_io_fail_timeout_sec": 0, 00:05:22.902 "disable_auto_failback": false, 00:05:22.902 "generate_uuids": false, 00:05:22.902 "transport_tos": 0, 
00:05:22.902 "nvme_error_stat": false, 00:05:22.902 "rdma_srq_size": 0, 00:05:22.902 "io_path_stat": false, 00:05:22.902 "allow_accel_sequence": false, 00:05:22.902 "rdma_max_cq_size": 0, 00:05:22.902 "rdma_cm_event_timeout_ms": 0, 00:05:22.902 "dhchap_digests": [ 00:05:22.902 "sha256", 00:05:22.902 "sha384", 00:05:22.902 "sha512" 00:05:22.902 ], 00:05:22.902 "dhchap_dhgroups": [ 00:05:22.902 "null", 00:05:22.902 "ffdhe2048", 00:05:22.902 "ffdhe3072", 00:05:22.902 "ffdhe4096", 00:05:22.902 "ffdhe6144", 00:05:22.902 "ffdhe8192" 00:05:22.902 ] 00:05:22.902 } 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "method": "bdev_nvme_set_hotplug", 00:05:22.902 "params": { 00:05:22.902 "period_us": 100000, 00:05:22.902 "enable": false 00:05:22.902 } 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "method": "bdev_wait_for_examine" 00:05:22.902 } 00:05:22.902 ] 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "scsi", 00:05:22.902 "config": null 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "scheduler", 00:05:22.902 "config": [ 00:05:22.902 { 00:05:22.902 "method": "framework_set_scheduler", 00:05:22.902 "params": { 00:05:22.902 "name": "static" 00:05:22.902 } 00:05:22.902 } 00:05:22.902 ] 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "vhost_scsi", 00:05:22.902 "config": [] 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "vhost_blk", 00:05:22.902 "config": [] 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "ublk", 00:05:22.902 "config": [] 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "nbd", 00:05:22.902 "config": [] 00:05:22.902 }, 00:05:22.902 { 00:05:22.902 "subsystem": "nvmf", 00:05:22.902 "config": [ 00:05:22.902 { 00:05:22.902 "method": "nvmf_set_config", 00:05:22.903 "params": { 00:05:22.903 "discovery_filter": "match_any", 00:05:22.903 "admin_cmd_passthru": { 00:05:22.903 "identify_ctrlr": false 00:05:22.903 }, 00:05:22.903 "dhchap_digests": [ 00:05:22.903 "sha256", 00:05:22.903 "sha384", 00:05:22.903 "sha512" 00:05:22.903 ], 00:05:22.903 "dhchap_dhgroups": [ 00:05:22.903 "null", 00:05:22.903 "ffdhe2048", 00:05:22.903 "ffdhe3072", 00:05:22.903 "ffdhe4096", 00:05:22.903 "ffdhe6144", 00:05:22.903 "ffdhe8192" 00:05:22.903 ] 00:05:22.903 } 00:05:22.903 }, 00:05:22.903 { 00:05:22.903 "method": "nvmf_set_max_subsystems", 00:05:22.903 "params": { 00:05:22.903 "max_subsystems": 1024 00:05:22.903 } 00:05:22.903 }, 00:05:22.903 { 00:05:22.903 "method": "nvmf_set_crdt", 00:05:22.903 "params": { 00:05:22.903 "crdt1": 0, 00:05:22.903 "crdt2": 0, 00:05:22.903 "crdt3": 0 00:05:22.903 } 00:05:22.903 }, 00:05:22.903 { 00:05:22.903 "method": "nvmf_create_transport", 00:05:22.903 "params": { 00:05:22.903 "trtype": "TCP", 00:05:22.903 "max_queue_depth": 128, 00:05:22.903 "max_io_qpairs_per_ctrlr": 127, 00:05:22.903 "in_capsule_data_size": 4096, 00:05:22.903 "max_io_size": 131072, 00:05:22.903 "io_unit_size": 131072, 00:05:22.903 "max_aq_depth": 128, 00:05:22.903 "num_shared_buffers": 511, 00:05:22.903 "buf_cache_size": 4294967295, 00:05:22.903 "dif_insert_or_strip": false, 00:05:22.903 "zcopy": false, 00:05:22.903 "c2h_success": true, 00:05:22.903 "sock_priority": 0, 00:05:22.903 "abort_timeout_sec": 1, 00:05:22.903 "ack_timeout": 0, 00:05:22.903 "data_wr_pool_size": 0 00:05:22.903 } 00:05:22.903 } 00:05:22.903 ] 00:05:22.903 }, 00:05:22.903 { 00:05:22.903 "subsystem": "iscsi", 00:05:22.903 "config": [ 00:05:22.903 { 00:05:22.903 "method": "iscsi_set_options", 00:05:22.903 "params": { 00:05:22.903 "node_base": "iqn.2016-06.io.spdk", 00:05:22.903 "max_sessions": 
128, 00:05:22.903 "max_connections_per_session": 2, 00:05:22.903 "max_queue_depth": 64, 00:05:22.903 "default_time2wait": 2, 00:05:22.903 "default_time2retain": 20, 00:05:22.903 "first_burst_length": 8192, 00:05:22.903 "immediate_data": true, 00:05:22.903 "allow_duplicated_isid": false, 00:05:22.903 "error_recovery_level": 0, 00:05:22.903 "nop_timeout": 60, 00:05:22.903 "nop_in_interval": 30, 00:05:22.903 "disable_chap": false, 00:05:22.903 "require_chap": false, 00:05:22.903 "mutual_chap": false, 00:05:22.903 "chap_group": 0, 00:05:22.903 "max_large_datain_per_connection": 64, 00:05:22.903 "max_r2t_per_connection": 4, 00:05:22.903 "pdu_pool_size": 36864, 00:05:22.903 "immediate_data_pool_size": 16384, 00:05:22.903 "data_out_pool_size": 2048 00:05:22.903 } 00:05:22.903 } 00:05:22.903 ] 00:05:22.903 } 00:05:22.903 ] 00:05:22.903 } 00:05:22.903 10:24:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:22.903 10:24:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 249461 00:05:22.903 10:24:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 249461 ']' 00:05:22.903 10:24:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 249461 00:05:22.903 10:24:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:22.903 10:24:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:22.903 10:24:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 249461 00:05:22.903 10:24:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:22.903 10:24:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:22.903 10:24:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 249461' 00:05:22.903 killing process with pid 249461 00:05:22.903 10:24:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 249461 00:05:22.903 10:24:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 249461 00:05:23.163 10:24:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=249603 00:05:23.163 10:24:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:23.163 10:24:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:28.429 10:24:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 249603 00:05:28.429 10:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 249603 ']' 00:05:28.429 10:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 249603 00:05:28.429 10:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:28.429 10:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:28.429 10:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 249603 00:05:28.429 10:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:28.429 10:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:28.429 10:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 249603' 00:05:28.429 killing process with pid 249603 00:05:28.429 10:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 249603 00:05:28.429 10:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 249603 00:05:28.687 10:24:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:28.687 10:24:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:28.687 00:05:28.687 real 0m6.527s 00:05:28.687 user 0m6.173s 00:05:28.687 sys 0m0.690s 00:05:28.687 10:24:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.687 10:24:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.687 ************************************ 00:05:28.687 END TEST skip_rpc_with_json 00:05:28.687 ************************************ 00:05:28.687 10:24:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:28.687 10:24:17 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.687 10:24:17 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.687 10:24:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.687 ************************************ 00:05:28.687 START TEST skip_rpc_with_delay 00:05:28.687 ************************************ 00:05:28.687 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:28.687 10:24:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.687 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:28.687 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.687 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.688 [2024-11-15 
10:24:17.135682] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:28.688 00:05:28.688 real 0m0.074s 00:05:28.688 user 0m0.052s 00:05:28.688 sys 0m0.022s 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.688 10:24:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:28.688 ************************************ 00:05:28.688 END TEST skip_rpc_with_delay 00:05:28.688 ************************************ 00:05:28.946 10:24:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:28.946 10:24:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:28.947 10:24:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:28.947 10:24:17 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.947 10:24:17 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.947 10:24:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.947 ************************************ 00:05:28.947 START TEST exit_on_failed_rpc_init 00:05:28.947 ************************************ 00:05:28.947 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:28.947 10:24:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=250326 00:05:28.947 10:24:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.947 10:24:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 250326 00:05:28.947 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 250326 ']' 00:05:28.947 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.947 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:28.947 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.947 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:28.947 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.947 [2024-11-15 10:24:17.256791] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
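The skip_rpc_with_delay entries just above end with app.c rejecting the flag combination: --wait-for-rpc makes no sense when --no-rpc-server disables the RPC server, so the test wraps the launch in the NOT helper and treats a successful start as a failure. Stripped of the harness, the assertion amounts to the following sketch (the message in the echo is mine, not the test's):

    # The target must refuse to start: a zero exit status here is a test failure.
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt unexpectedly accepted --no-rpc-server together with --wait-for-rpc" >&2
        exit 1
    fi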
00:05:28.947 [2024-11-15 10:24:17.256877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250326 ] 00:05:28.947 [2024-11-15 10:24:17.319343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.947 [2024-11-15 10:24:17.373539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:29.205 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.464 [2024-11-15 10:24:17.692108] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:05:29.464 [2024-11-15 10:24:17.692173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250331 ] 00:05:29.464 [2024-11-15 10:24:17.756499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.464 [2024-11-15 10:24:17.813833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.464 [2024-11-15 10:24:17.813957] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
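The two rpc.c errors around this point ('/var/tmp/spdk.sock in use' above and 'Unable to start RPC service' just below) are exactly what exit_on_failed_rpc_init is checking for: with the first target already bound to the default RPC socket, a second target on another core mask must fail initialization and stop with a non-zero status. Reduced to its essentials, and assuming the default socket path and repo-relative binaries, the scenario looks like:

    # The first target owns the default RPC socket (/var/tmp/spdk.sock).
    ./build/bin/spdk_tgt -m 0x1 &
    first=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done

    # A second target reusing the same socket path must fail rpc_initialize and
    # exit non-zero; passing -r with a different path would avoid the collision.
    if ./build/bin/spdk_tgt -m 0x2; then
        echo "second spdk_tgt started although the RPC socket was already in use" >&2
        kill "$first"
        exit 1
    fi
    kill -SIGINT "$first"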
00:05:29.464 [2024-11-15 10:24:17.813976] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:29.464 [2024-11-15 10:24:17.813988] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:29.464 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:29.464 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:29.464 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:29.464 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:29.464 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:29.464 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:29.464 10:24:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:29.464 10:24:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 250326 00:05:29.464 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 250326 ']' 00:05:29.464 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 250326 00:05:29.464 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:29.464 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:29.464 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 250326 00:05:29.722 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:29.722 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:29.722 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 250326' 00:05:29.722 killing process with pid 250326 00:05:29.722 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 250326 00:05:29.722 10:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 250326 00:05:29.982 00:05:29.983 real 0m1.137s 00:05:29.983 user 0m1.229s 00:05:29.983 sys 0m0.442s 00:05:29.983 10:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.983 10:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.983 ************************************ 00:05:29.983 END TEST exit_on_failed_rpc_init 00:05:29.983 ************************************ 00:05:29.983 10:24:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:29.983 00:05:29.983 real 0m13.534s 00:05:29.983 user 0m12.788s 00:05:29.983 sys 0m1.642s 00:05:29.983 10:24:18 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.983 10:24:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.983 ************************************ 00:05:29.983 END TEST skip_rpc 00:05:29.983 ************************************ 00:05:29.983 10:24:18 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:29.983 10:24:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.983 10:24:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.983 10:24:18 -- 
common/autotest_common.sh@10 -- # set +x 00:05:29.983 ************************************ 00:05:29.983 START TEST rpc_client 00:05:29.983 ************************************ 00:05:29.983 10:24:18 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:30.242 * Looking for test storage... 00:05:30.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:30.242 10:24:18 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:30.242 10:24:18 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:30.242 10:24:18 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:30.242 10:24:18 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.243 10:24:18 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:30.243 10:24:18 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.243 10:24:18 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:30.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.243 --rc genhtml_branch_coverage=1 00:05:30.243 --rc genhtml_function_coverage=1 00:05:30.243 --rc genhtml_legend=1 00:05:30.243 --rc geninfo_all_blocks=1 00:05:30.243 --rc geninfo_unexecuted_blocks=1 00:05:30.243 00:05:30.243 ' 00:05:30.243 10:24:18 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:30.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.243 --rc genhtml_branch_coverage=1 00:05:30.243 --rc genhtml_function_coverage=1 00:05:30.243 --rc genhtml_legend=1 00:05:30.243 --rc geninfo_all_blocks=1 00:05:30.243 --rc geninfo_unexecuted_blocks=1 00:05:30.243 00:05:30.243 ' 00:05:30.243 10:24:18 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:30.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.243 --rc genhtml_branch_coverage=1 00:05:30.243 --rc genhtml_function_coverage=1 00:05:30.243 --rc genhtml_legend=1 00:05:30.243 --rc geninfo_all_blocks=1 00:05:30.243 --rc geninfo_unexecuted_blocks=1 00:05:30.243 00:05:30.243 ' 00:05:30.243 10:24:18 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:30.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.243 --rc genhtml_branch_coverage=1 00:05:30.243 --rc genhtml_function_coverage=1 00:05:30.243 --rc genhtml_legend=1 00:05:30.243 --rc geninfo_all_blocks=1 00:05:30.243 --rc geninfo_unexecuted_blocks=1 00:05:30.243 00:05:30.243 ' 00:05:30.243 10:24:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:30.243 OK 00:05:30.243 10:24:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:30.243 00:05:30.243 real 0m0.162s 00:05:30.243 user 0m0.112s 00:05:30.243 sys 0m0.058s 00:05:30.243 10:24:18 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:30.243 10:24:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:30.243 ************************************ 00:05:30.243 END TEST rpc_client 00:05:30.243 ************************************ 00:05:30.243 10:24:18 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
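The cmp_versions trace above (lt 1.15 2) is only deciding whether the installed lcov is older than 2.x so the right coverage flags get picked. Independent of scripts/common.sh, the underlying idea is a field-by-field numeric comparison; a compact sketch follows (version_lt is my name, and it assumes purely numeric fields, unlike the real helper, which also splits on ':' and handles more cases):

    version_lt() {
        # Succeed when version $1 is strictly older than version $2.
        local IFS=.-
        local -a a b
        local i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "installed lcov predates 2.x"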
00:05:30.243 10:24:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:30.243 10:24:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:30.243 10:24:18 -- common/autotest_common.sh@10 -- # set +x 00:05:30.243 ************************************ 00:05:30.243 START TEST json_config 00:05:30.243 ************************************ 00:05:30.243 10:24:18 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:30.243 10:24:18 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:30.243 10:24:18 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:30.243 10:24:18 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:30.502 10:24:18 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:30.502 10:24:18 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.502 10:24:18 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.502 10:24:18 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.502 10:24:18 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.502 10:24:18 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.502 10:24:18 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.502 10:24:18 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.502 10:24:18 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.502 10:24:18 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.502 10:24:18 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.502 10:24:18 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.502 10:24:18 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:30.503 10:24:18 json_config -- scripts/common.sh@345 -- # : 1 00:05:30.503 10:24:18 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.503 10:24:18 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.503 10:24:18 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:30.503 10:24:18 json_config -- scripts/common.sh@353 -- # local d=1 00:05:30.503 10:24:18 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.503 10:24:18 json_config -- scripts/common.sh@355 -- # echo 1 00:05:30.503 10:24:18 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.503 10:24:18 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:30.503 10:24:18 json_config -- scripts/common.sh@353 -- # local d=2 00:05:30.503 10:24:18 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.503 10:24:18 json_config -- scripts/common.sh@355 -- # echo 2 00:05:30.503 10:24:18 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.503 10:24:18 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.503 10:24:18 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.503 10:24:18 json_config -- scripts/common.sh@368 -- # return 0 00:05:30.503 10:24:18 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.503 10:24:18 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:30.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.503 --rc genhtml_branch_coverage=1 00:05:30.503 --rc genhtml_function_coverage=1 00:05:30.503 --rc genhtml_legend=1 00:05:30.503 --rc geninfo_all_blocks=1 00:05:30.503 --rc geninfo_unexecuted_blocks=1 00:05:30.503 00:05:30.503 ' 00:05:30.503 10:24:18 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:30.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.503 --rc genhtml_branch_coverage=1 00:05:30.503 --rc genhtml_function_coverage=1 00:05:30.503 --rc genhtml_legend=1 00:05:30.503 --rc geninfo_all_blocks=1 00:05:30.503 --rc geninfo_unexecuted_blocks=1 00:05:30.503 00:05:30.503 ' 00:05:30.503 10:24:18 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:30.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.503 --rc genhtml_branch_coverage=1 00:05:30.503 --rc genhtml_function_coverage=1 00:05:30.503 --rc genhtml_legend=1 00:05:30.503 --rc geninfo_all_blocks=1 00:05:30.503 --rc geninfo_unexecuted_blocks=1 00:05:30.503 00:05:30.503 ' 00:05:30.503 10:24:18 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:30.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.503 --rc genhtml_branch_coverage=1 00:05:30.503 --rc genhtml_function_coverage=1 00:05:30.503 --rc genhtml_legend=1 00:05:30.503 --rc geninfo_all_blocks=1 00:05:30.503 --rc geninfo_unexecuted_blocks=1 00:05:30.503 00:05:30.503 ' 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:30.503 10:24:18 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.503 10:24:18 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:30.503 10:24:18 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.503 10:24:18 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.503 10:24:18 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.503 10:24:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.503 10:24:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.503 10:24:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.503 10:24:18 json_config -- paths/export.sh@5 -- # export PATH 00:05:30.503 10:24:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@51 -- # : 0 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:30.503 10:24:18 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:30.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:30.503 10:24:18 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:30.503 INFO: JSON configuration test init 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:30.503 10:24:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:30.503 10:24:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:30.503 10:24:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:30.503 10:24:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.503 10:24:18 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:30.503 10:24:18 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:30.503 10:24:18 json_config -- json_config/common.sh@10 -- # shift 00:05:30.503 10:24:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:30.503 10:24:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:30.503 10:24:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:30.503 10:24:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.503 10:24:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.503 10:24:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=250594 00:05:30.503 10:24:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:30.503 10:24:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:30.503 Waiting for target to run... 00:05:30.503 10:24:18 json_config -- json_config/common.sh@25 -- # waitforlisten 250594 /var/tmp/spdk_tgt.sock 00:05:30.503 10:24:18 json_config -- common/autotest_common.sh@833 -- # '[' -z 250594 ']' 00:05:30.504 10:24:18 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.504 10:24:18 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:30.504 10:24:18 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.504 10:24:18 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:30.504 10:24:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.504 [2024-11-15 10:24:18.834023] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
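Because this target is started with --wait-for-rpc, it brings up the RPC server on /var/tmp/spdk_tgt.sock and then holds; subsystem initialization only proceeds once a configuration is pushed over that socket, which is what the gen_nvme.sh and load_config hand-off in the following entries does. A stripped-down version of the same flow (repo-relative paths assumed, the until-loop standing in for the harness's waitforlisten):

    sock=/var/tmp/spdk_tgt.sock

    # Start the target paused: the RPC server listens, but subsystems hold off
    # initializing until configuration arrives.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &

    until ./scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done

    # Push a generated NVMe bdev configuration; applying it lets the target
    # continue past the --wait-for-rpc barrier, as the trace below shows.
    ./scripts/gen_nvme.sh --json-with-subsystems | ./scripts/rpc.py -s "$sock" load_config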
00:05:30.504 [2024-11-15 10:24:18.834116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250594 ] 00:05:30.763 [2024-11-15 10:24:19.167496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.763 [2024-11-15 10:24:19.209110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.698 10:24:19 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:31.698 10:24:19 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:31.698 10:24:19 json_config -- json_config/common.sh@26 -- # echo '' 00:05:31.698 00:05:31.698 10:24:19 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:31.698 10:24:19 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:31.698 10:24:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:31.698 10:24:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.698 10:24:19 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:31.698 10:24:19 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:31.698 10:24:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:31.698 10:24:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.698 10:24:19 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:31.698 10:24:19 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:31.698 10:24:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:34.985 10:24:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.985 10:24:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:34.985 10:24:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:34.985 10:24:23 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@54 -- # sort 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:34.985 10:24:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.985 10:24:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:34.985 10:24:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.985 10:24:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:34.985 10:24:23 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.985 10:24:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:35.243 MallocForNvmf0 00:05:35.243 10:24:23 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:35.243 10:24:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:35.511 MallocForNvmf1 00:05:35.511 10:24:23 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:35.511 10:24:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:35.769 [2024-11-15 10:24:24.101215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:35.769 10:24:24 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:35.769 10:24:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:36.027 10:24:24 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:36.027 10:24:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:36.285 10:24:24 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:36.285 10:24:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:36.543 10:24:24 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:36.543 10:24:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:36.802 [2024-11-15 10:24:25.168581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:36.802 10:24:25 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:36.802 10:24:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.802 10:24:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.802 10:24:25 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:36.802 10:24:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.802 10:24:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.802 10:24:25 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:36.802 10:24:25 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:36.802 10:24:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:37.060 MallocBdevForConfigChangeCheck 00:05:37.060 10:24:25 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:37.060 10:24:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:37.060 10:24:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.060 10:24:25 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:37.060 10:24:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.626 10:24:25 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:37.626 INFO: shutting down applications... 
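Before the shutdown that follows, the trace above assembled the NVMe-oF target one RPC at a time. Collected in one place, the main calls are the ones below (the rpc wrapper is my shorthand for the tgt_rpc helper seen in the trace, and the redirection target for save_config is an assumption about where the file lands):

    sock=/var/tmp/spdk_tgt.sock
    rpc() { ./scripts/rpc.py -s "$sock" "$@"; }

    # Two malloc bdevs to export as namespaces, plus the TCP transport.
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc nvmf_create_transport -t tcp -u 8192 -c 0

    # One subsystem carrying both namespaces, listening on the loopback
    # address and port that the saved configuration will reference.
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

    # The saved configuration is what the relaunch further down is checked against.
    rpc save_config > spdk_tgt_config.json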
00:05:37.626 10:24:25 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:37.626 10:24:25 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:37.626 10:24:25 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:37.626 10:24:25 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:40.157 Calling clear_iscsi_subsystem 00:05:40.157 Calling clear_nvmf_subsystem 00:05:40.157 Calling clear_nbd_subsystem 00:05:40.157 Calling clear_ublk_subsystem 00:05:40.157 Calling clear_vhost_blk_subsystem 00:05:40.157 Calling clear_vhost_scsi_subsystem 00:05:40.157 Calling clear_bdev_subsystem 00:05:40.157 10:24:28 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:40.157 10:24:28 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:40.157 10:24:28 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:40.157 10:24:28 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.157 10:24:28 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:40.157 10:24:28 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:40.416 10:24:28 json_config -- json_config/json_config.sh@352 -- # break 00:05:40.416 10:24:28 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:40.416 10:24:28 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:40.416 10:24:28 json_config -- json_config/common.sh@31 -- # local app=target 00:05:40.416 10:24:28 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:40.416 10:24:28 json_config -- json_config/common.sh@35 -- # [[ -n 250594 ]] 00:05:40.416 10:24:28 json_config -- json_config/common.sh@38 -- # kill -SIGINT 250594 00:05:40.416 10:24:28 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:40.416 10:24:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.416 10:24:28 json_config -- json_config/common.sh@41 -- # kill -0 250594 00:05:40.416 10:24:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.985 10:24:29 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.986 10:24:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.986 10:24:29 json_config -- json_config/common.sh@41 -- # kill -0 250594 00:05:40.986 10:24:29 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.986 10:24:29 json_config -- json_config/common.sh@43 -- # break 00:05:40.986 10:24:29 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.986 10:24:29 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.986 SPDK target shutdown done 00:05:40.986 10:24:29 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:40.986 INFO: relaunching applications... 
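The clear-and-shutdown sequence above follows a simple pattern from test/json_config: clear every subsystem, confirm the saved config is empty, then SIGINT the target and poll its pid until it exits (a condensed sketch; variable names follow the trace):

    clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    # retried until save_config, stripped of global parameters, comes back empty
    rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | config_filter.py -method delete_global_parameters \
        | config_filter.py -method check_empty
    # graceful shutdown: SIGINT, then poll for up to 30 x 0.5s = 15 seconds
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" || break     # break once the process is gone
        sleep 0.5
    done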
00:05:40.986 10:24:29 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.986 10:24:29 json_config -- json_config/common.sh@9 -- # local app=target 00:05:40.986 10:24:29 json_config -- json_config/common.sh@10 -- # shift 00:05:40.986 10:24:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.986 10:24:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.986 10:24:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.986 10:24:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.986 10:24:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.986 10:24:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=251927 00:05:40.986 10:24:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.986 10:24:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.986 Waiting for target to run... 00:05:40.986 10:24:29 json_config -- json_config/common.sh@25 -- # waitforlisten 251927 /var/tmp/spdk_tgt.sock 00:05:40.986 10:24:29 json_config -- common/autotest_common.sh@833 -- # '[' -z 251927 ']' 00:05:40.986 10:24:29 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.986 10:24:29 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.986 10:24:29 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.986 10:24:29 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.986 10:24:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.986 [2024-11-15 10:24:29.431890] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:05:40.986 [2024-11-15 10:24:29.431987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251927 ] 00:05:41.555 [2024-11-15 10:24:29.942522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.555 [2024-11-15 10:24:29.994498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.842 [2024-11-15 10:24:33.052185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:44.842 [2024-11-15 10:24:33.084630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:44.842 10:24:33 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.842 10:24:33 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:44.842 10:24:33 json_config -- json_config/common.sh@26 -- # echo '' 00:05:44.842 00:05:44.842 10:24:33 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:44.842 10:24:33 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:44.842 INFO: Checking if target configuration is the same... 
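The relaunch-and-verify step that follows is, conceptually, "boot from the saved JSON, dump the running config, normalize both sides, and diff them". This is a sketch condensed from the json_diff.sh trace below, with paths shortened; the real script writes the normalized configs to mktemp'd files, and running.json/saved.json here are illustrative names:

    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    waitforlisten "$!" /var/tmp/spdk_tgt.sock
    rpc.py -s /var/tmp/spdk_tgt.sock save_config | config_filter.py -method sort > running.json
    config_filter.py -method sort < spdk_tgt_config.json > saved.json
    diff -u running.json saved.json     # empty diff => 'JSON config files are the same'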
00:05:44.842 10:24:33 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.842 10:24:33 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:44.842 10:24:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.842 + '[' 2 -ne 2 ']' 00:05:44.842 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:44.842 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:44.842 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:44.842 +++ basename /dev/fd/62 00:05:44.842 ++ mktemp /tmp/62.XXX 00:05:44.842 + tmp_file_1=/tmp/62.mc6 00:05:44.842 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.842 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:44.842 + tmp_file_2=/tmp/spdk_tgt_config.json.eY2 00:05:44.842 + ret=0 00:05:44.842 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.100 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.357 + diff -u /tmp/62.mc6 /tmp/spdk_tgt_config.json.eY2 00:05:45.357 + echo 'INFO: JSON config files are the same' 00:05:45.357 INFO: JSON config files are the same 00:05:45.357 + rm /tmp/62.mc6 /tmp/spdk_tgt_config.json.eY2 00:05:45.357 + exit 0 00:05:45.357 10:24:33 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:45.357 10:24:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:45.357 INFO: changing configuration and checking if this can be detected... 00:05:45.357 10:24:33 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:45.357 10:24:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:45.614 10:24:33 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.614 10:24:33 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:45.614 10:24:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.614 + '[' 2 -ne 2 ']' 00:05:45.614 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:45.614 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:45.614 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:45.614 +++ basename /dev/fd/62 00:05:45.614 ++ mktemp /tmp/62.XXX 00:05:45.614 + tmp_file_1=/tmp/62.Mfx 00:05:45.614 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.614 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:45.614 + tmp_file_2=/tmp/spdk_tgt_config.json.gQc 00:05:45.614 + ret=0 00:05:45.614 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.871 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.871 + diff -u /tmp/62.Mfx /tmp/spdk_tgt_config.json.gQc 00:05:45.871 + ret=1 00:05:45.871 + echo '=== Start of file: /tmp/62.Mfx ===' 00:05:45.871 + cat /tmp/62.Mfx 00:05:45.871 + echo '=== End of file: /tmp/62.Mfx ===' 00:05:45.871 + echo '' 00:05:45.871 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gQc ===' 00:05:45.871 + cat /tmp/spdk_tgt_config.json.gQc 00:05:45.871 + echo '=== End of file: /tmp/spdk_tgt_config.json.gQc ===' 00:05:45.871 + echo '' 00:05:45.871 + rm /tmp/62.Mfx /tmp/spdk_tgt_config.json.gQc 00:05:45.871 + exit 1 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:45.871 INFO: configuration change detected. 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:45.871 10:24:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:45.871 10:24:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@324 -- # [[ -n 251927 ]] 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:45.871 10:24:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:45.871 10:24:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:45.871 10:24:34 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:45.871 10:24:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:45.871 10:24:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.128 10:24:34 json_config -- json_config/json_config.sh@330 -- # killprocess 251927 00:05:46.128 10:24:34 json_config -- common/autotest_common.sh@952 -- # '[' -z 251927 ']' 00:05:46.128 10:24:34 json_config -- common/autotest_common.sh@956 -- # kill -0 251927 00:05:46.128 10:24:34 json_config -- common/autotest_common.sh@957 -- # uname 00:05:46.129 10:24:34 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:46.129 10:24:34 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 251927 00:05:46.129 10:24:34 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:46.129 10:24:34 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:46.129 10:24:34 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 251927' 00:05:46.129 killing process with pid 251927 00:05:46.129 10:24:34 json_config -- common/autotest_common.sh@971 -- # kill 251927 00:05:46.129 10:24:34 json_config -- common/autotest_common.sh@976 -- # wait 251927 00:05:48.658 10:24:36 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.658 10:24:36 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:48.658 10:24:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.658 10:24:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.658 10:24:36 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:48.658 10:24:36 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:48.658 INFO: Success 00:05:48.658 00:05:48.658 real 0m18.332s 00:05:48.658 user 0m19.929s 00:05:48.658 sys 0m2.621s 00:05:48.658 10:24:36 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.658 10:24:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.658 ************************************ 00:05:48.658 END TEST json_config 00:05:48.658 ************************************ 00:05:48.658 10:24:36 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:48.658 10:24:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.658 10:24:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.658 10:24:36 -- common/autotest_common.sh@10 -- # set +x 00:05:48.658 ************************************ 00:05:48.658 START TEST json_config_extra_key 00:05:48.658 ************************************ 00:05:48.658 10:24:37 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:48.658 10:24:37 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:48.658 10:24:37 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:48.658 10:24:37 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:48.918 10:24:37 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.918 10:24:37 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:48.918 10:24:37 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.918 10:24:37 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:48.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.918 --rc genhtml_branch_coverage=1 00:05:48.918 --rc genhtml_function_coverage=1 00:05:48.918 --rc genhtml_legend=1 00:05:48.918 --rc geninfo_all_blocks=1 00:05:48.918 --rc geninfo_unexecuted_blocks=1 00:05:48.918 00:05:48.918 ' 00:05:48.918 10:24:37 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:48.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.918 --rc genhtml_branch_coverage=1 00:05:48.918 --rc genhtml_function_coverage=1 00:05:48.918 --rc genhtml_legend=1 00:05:48.918 --rc geninfo_all_blocks=1 00:05:48.918 --rc geninfo_unexecuted_blocks=1 00:05:48.918 00:05:48.918 ' 00:05:48.918 10:24:37 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:48.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.918 --rc genhtml_branch_coverage=1 00:05:48.918 --rc genhtml_function_coverage=1 00:05:48.918 --rc genhtml_legend=1 00:05:48.918 --rc geninfo_all_blocks=1 00:05:48.918 --rc geninfo_unexecuted_blocks=1 00:05:48.918 00:05:48.918 ' 00:05:48.918 10:24:37 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:48.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.918 --rc genhtml_branch_coverage=1 00:05:48.918 --rc genhtml_function_coverage=1 00:05:48.918 --rc genhtml_legend=1 00:05:48.918 --rc geninfo_all_blocks=1 00:05:48.918 --rc geninfo_unexecuted_blocks=1 00:05:48.918 00:05:48.918 ' 00:05:48.918 10:24:37 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.918 10:24:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.918 10:24:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.919 10:24:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.919 10:24:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.919 10:24:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.919 10:24:37 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.919 10:24:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:48.919 10:24:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.919 10:24:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:48.919 10:24:37 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:48.919 10:24:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:48.919 10:24:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.919 10:24:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.919 10:24:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.919 10:24:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:48.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:48.919 10:24:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:48.919 10:24:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:48.919 10:24:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:48.919 10:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:48.919 10:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:48.919 10:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:48.919 10:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:48.919 10:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:48.919 10:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:48.919 10:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:48.919 10:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:48.919 10:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:48.919 10:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:48.919 10:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:48.919 INFO: launching applications... 
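The "[: : integer expression expected" message above is the shell complaining that nvmf/common.sh compared an empty value numerically ('[' '' -eq 1 ']' in the trace). A minimal reproduction of that class of warning, purely for illustration:

    empty_var=''
    [ "$empty_var" -eq 1 ]      # -> [: : integer expression expected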
00:05:48.919 10:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:48.919 10:24:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:48.919 10:24:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:48.919 10:24:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:48.919 10:24:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:48.919 10:24:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:48.919 10:24:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.919 10:24:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.919 10:24:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=252977 00:05:48.919 10:24:37 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:48.919 10:24:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:48.919 Waiting for target to run... 00:05:48.919 10:24:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 252977 /var/tmp/spdk_tgt.sock 00:05:48.919 10:24:37 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 252977 ']' 00:05:48.919 10:24:37 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.919 10:24:37 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:48.919 10:24:37 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.919 10:24:37 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:48.919 10:24:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:48.919 [2024-11-15 10:24:37.204858] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:05:48.919 [2024-11-15 10:24:37.204946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252977 ] 00:05:49.177 [2024-11-15 10:24:37.539844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.177 [2024-11-15 10:24:37.582030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.744 10:24:38 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:49.744 10:24:38 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:49.744 10:24:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:49.744 00:05:49.744 10:24:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:49.744 INFO: shutting down applications... 
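As the trace above shows, the json_config harness keys everything off per-app associative arrays, with 'target' as the only entry in this run. A minimal sketch of that pattern using the values from the trace ($rootdir standing in for the SPDK checkout path, and the launch line condensed):

    declare -A app_pid app_socket app_params configs_path
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    configs_path[target]=$rootdir/test/json_config/extra_key.json
    spdk_tgt ${app_params[target]} -r "${app_socket[target]}" --json "${configs_path[target]}" &
    app_pid[target]=$!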
00:05:49.744 10:24:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:49.744 10:24:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:49.744 10:24:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:49.744 10:24:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 252977 ]] 00:05:49.744 10:24:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 252977 00:05:49.744 10:24:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:49.744 10:24:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.744 10:24:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 252977 00:05:49.744 10:24:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.311 10:24:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.311 10:24:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.311 10:24:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 252977 00:05:50.311 10:24:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:50.311 10:24:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:50.311 10:24:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:50.311 10:24:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:50.311 SPDK target shutdown done 00:05:50.311 10:24:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:50.311 Success 00:05:50.311 00:05:50.311 real 0m1.679s 00:05:50.311 user 0m1.690s 00:05:50.311 sys 0m0.432s 00:05:50.311 10:24:38 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.311 10:24:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:50.311 ************************************ 00:05:50.311 END TEST json_config_extra_key 00:05:50.311 ************************************ 00:05:50.311 10:24:38 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.311 10:24:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:50.311 10:24:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.311 10:24:38 -- common/autotest_common.sh@10 -- # set +x 00:05:50.311 ************************************ 00:05:50.311 START TEST alias_rpc 00:05:50.311 ************************************ 00:05:50.311 10:24:38 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.569 * Looking for test storage... 
00:05:50.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:50.569 10:24:38 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:50.569 10:24:38 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:50.569 10:24:38 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.569 10:24:38 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.569 10:24:38 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:50.569 10:24:38 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.569 10:24:38 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.569 --rc genhtml_branch_coverage=1 00:05:50.569 --rc genhtml_function_coverage=1 00:05:50.569 --rc genhtml_legend=1 00:05:50.569 --rc geninfo_all_blocks=1 00:05:50.569 --rc geninfo_unexecuted_blocks=1 00:05:50.569 00:05:50.569 ' 00:05:50.569 10:24:38 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.569 --rc genhtml_branch_coverage=1 00:05:50.569 --rc genhtml_function_coverage=1 00:05:50.569 --rc genhtml_legend=1 00:05:50.569 --rc geninfo_all_blocks=1 00:05:50.569 --rc geninfo_unexecuted_blocks=1 00:05:50.569 00:05:50.569 ' 00:05:50.569 10:24:38 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:50.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.569 --rc genhtml_branch_coverage=1 00:05:50.569 --rc genhtml_function_coverage=1 00:05:50.569 --rc genhtml_legend=1 00:05:50.569 --rc geninfo_all_blocks=1 00:05:50.569 --rc geninfo_unexecuted_blocks=1 00:05:50.569 00:05:50.569 ' 00:05:50.569 10:24:38 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.569 --rc genhtml_branch_coverage=1 00:05:50.569 --rc genhtml_function_coverage=1 00:05:50.569 --rc genhtml_legend=1 00:05:50.569 --rc geninfo_all_blocks=1 00:05:50.569 --rc geninfo_unexecuted_blocks=1 00:05:50.569 00:05:50.569 ' 00:05:50.569 10:24:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.569 10:24:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=253292 00:05:50.569 10:24:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.569 10:24:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 253292 00:05:50.569 10:24:38 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 253292 ']' 00:05:50.570 10:24:38 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.570 10:24:38 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:50.570 10:24:38 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.570 10:24:38 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:50.570 10:24:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.570 [2024-11-15 10:24:38.937950] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:05:50.570 [2024-11-15 10:24:38.938038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253292 ] 00:05:50.570 [2024-11-15 10:24:39.002405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.827 [2024-11-15 10:24:39.060546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.083 10:24:39 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:51.083 10:24:39 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:51.083 10:24:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:51.340 10:24:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 253292 00:05:51.340 10:24:39 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 253292 ']' 00:05:51.340 10:24:39 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 253292 00:05:51.340 10:24:39 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:51.340 10:24:39 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:51.340 10:24:39 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 253292 00:05:51.340 10:24:39 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:51.340 10:24:39 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:51.340 10:24:39 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 253292' 00:05:51.340 killing process with pid 253292 00:05:51.340 10:24:39 alias_rpc -- common/autotest_common.sh@971 -- # kill 253292 00:05:51.340 10:24:39 alias_rpc -- common/autotest_common.sh@976 -- # wait 253292 00:05:51.597 00:05:51.597 real 0m1.319s 00:05:51.597 user 0m1.419s 00:05:51.597 sys 0m0.445s 00:05:51.597 10:24:40 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.597 10:24:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.597 ************************************ 00:05:51.597 END TEST alias_rpc 00:05:51.597 ************************************ 00:05:51.854 10:24:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:51.854 10:24:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:51.854 10:24:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:51.854 10:24:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.854 10:24:40 -- common/autotest_common.sh@10 -- # set +x 00:05:51.854 ************************************ 00:05:51.854 START TEST spdkcli_tcp 00:05:51.854 ************************************ 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:51.854 * Looking for test storage... 
00:05:51.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.854 10:24:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:51.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.854 --rc genhtml_branch_coverage=1 00:05:51.854 --rc genhtml_function_coverage=1 00:05:51.854 --rc genhtml_legend=1 00:05:51.854 --rc geninfo_all_blocks=1 00:05:51.854 --rc geninfo_unexecuted_blocks=1 00:05:51.854 00:05:51.854 ' 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:51.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.854 --rc genhtml_branch_coverage=1 00:05:51.854 --rc genhtml_function_coverage=1 00:05:51.854 --rc genhtml_legend=1 00:05:51.854 --rc geninfo_all_blocks=1 00:05:51.854 --rc 
geninfo_unexecuted_blocks=1 00:05:51.854 00:05:51.854 ' 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:51.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.854 --rc genhtml_branch_coverage=1 00:05:51.854 --rc genhtml_function_coverage=1 00:05:51.854 --rc genhtml_legend=1 00:05:51.854 --rc geninfo_all_blocks=1 00:05:51.854 --rc geninfo_unexecuted_blocks=1 00:05:51.854 00:05:51.854 ' 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:51.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.854 --rc genhtml_branch_coverage=1 00:05:51.854 --rc genhtml_function_coverage=1 00:05:51.854 --rc genhtml_legend=1 00:05:51.854 --rc geninfo_all_blocks=1 00:05:51.854 --rc geninfo_unexecuted_blocks=1 00:05:51.854 00:05:51.854 ' 00:05:51.854 10:24:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:51.854 10:24:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:51.854 10:24:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:51.854 10:24:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:51.854 10:24:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:51.854 10:24:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:51.854 10:24:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.854 10:24:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=253493 00:05:51.854 10:24:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:51.854 10:24:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 253493 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 253493 ']' 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:51.854 10:24:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.854 [2024-11-15 10:24:40.307523] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
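The spdkcli_tcp run that follows exercises the same RPC surface over TCP: a socat process bridges TCP port 9998 to the target's UNIX-domain socket, and rpc.py is pointed at 127.0.0.1:9998 instead of the socket file (condensed from the socat and rpc_get_methods lines below; flags exactly as in the trace):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &    # TCP <-> UNIX-socket bridge
    rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods    # returns the method list shown below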
00:05:51.854 [2024-11-15 10:24:40.307632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253493 ] 00:05:52.111 [2024-11-15 10:24:40.373830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.111 [2024-11-15 10:24:40.431684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.111 [2024-11-15 10:24:40.431689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.368 10:24:40 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.368 10:24:40 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:52.368 10:24:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=253503 00:05:52.368 10:24:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:52.368 10:24:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:52.627 [ 00:05:52.627 "bdev_malloc_delete", 00:05:52.627 "bdev_malloc_create", 00:05:52.627 "bdev_null_resize", 00:05:52.627 "bdev_null_delete", 00:05:52.627 "bdev_null_create", 00:05:52.627 "bdev_nvme_cuse_unregister", 00:05:52.627 "bdev_nvme_cuse_register", 00:05:52.627 "bdev_opal_new_user", 00:05:52.627 "bdev_opal_set_lock_state", 00:05:52.627 "bdev_opal_delete", 00:05:52.627 "bdev_opal_get_info", 00:05:52.627 "bdev_opal_create", 00:05:52.627 "bdev_nvme_opal_revert", 00:05:52.627 "bdev_nvme_opal_init", 00:05:52.627 "bdev_nvme_send_cmd", 00:05:52.627 "bdev_nvme_set_keys", 00:05:52.627 "bdev_nvme_get_path_iostat", 00:05:52.627 "bdev_nvme_get_mdns_discovery_info", 00:05:52.627 "bdev_nvme_stop_mdns_discovery", 00:05:52.627 "bdev_nvme_start_mdns_discovery", 00:05:52.627 "bdev_nvme_set_multipath_policy", 00:05:52.627 "bdev_nvme_set_preferred_path", 00:05:52.627 "bdev_nvme_get_io_paths", 00:05:52.627 "bdev_nvme_remove_error_injection", 00:05:52.627 "bdev_nvme_add_error_injection", 00:05:52.627 "bdev_nvme_get_discovery_info", 00:05:52.627 "bdev_nvme_stop_discovery", 00:05:52.627 "bdev_nvme_start_discovery", 00:05:52.627 "bdev_nvme_get_controller_health_info", 00:05:52.627 "bdev_nvme_disable_controller", 00:05:52.627 "bdev_nvme_enable_controller", 00:05:52.627 "bdev_nvme_reset_controller", 00:05:52.627 "bdev_nvme_get_transport_statistics", 00:05:52.627 "bdev_nvme_apply_firmware", 00:05:52.627 "bdev_nvme_detach_controller", 00:05:52.627 "bdev_nvme_get_controllers", 00:05:52.627 "bdev_nvme_attach_controller", 00:05:52.627 "bdev_nvme_set_hotplug", 00:05:52.627 "bdev_nvme_set_options", 00:05:52.627 "bdev_passthru_delete", 00:05:52.627 "bdev_passthru_create", 00:05:52.627 "bdev_lvol_set_parent_bdev", 00:05:52.627 "bdev_lvol_set_parent", 00:05:52.627 "bdev_lvol_check_shallow_copy", 00:05:52.627 "bdev_lvol_start_shallow_copy", 00:05:52.627 "bdev_lvol_grow_lvstore", 00:05:52.627 "bdev_lvol_get_lvols", 00:05:52.627 "bdev_lvol_get_lvstores", 00:05:52.627 "bdev_lvol_delete", 00:05:52.627 "bdev_lvol_set_read_only", 00:05:52.627 "bdev_lvol_resize", 00:05:52.627 "bdev_lvol_decouple_parent", 00:05:52.627 "bdev_lvol_inflate", 00:05:52.627 "bdev_lvol_rename", 00:05:52.627 "bdev_lvol_clone_bdev", 00:05:52.627 "bdev_lvol_clone", 00:05:52.627 "bdev_lvol_snapshot", 00:05:52.627 "bdev_lvol_create", 00:05:52.627 "bdev_lvol_delete_lvstore", 00:05:52.627 "bdev_lvol_rename_lvstore", 
00:05:52.627 "bdev_lvol_create_lvstore", 00:05:52.627 "bdev_raid_set_options", 00:05:52.627 "bdev_raid_remove_base_bdev", 00:05:52.627 "bdev_raid_add_base_bdev", 00:05:52.627 "bdev_raid_delete", 00:05:52.627 "bdev_raid_create", 00:05:52.627 "bdev_raid_get_bdevs", 00:05:52.627 "bdev_error_inject_error", 00:05:52.627 "bdev_error_delete", 00:05:52.627 "bdev_error_create", 00:05:52.627 "bdev_split_delete", 00:05:52.627 "bdev_split_create", 00:05:52.627 "bdev_delay_delete", 00:05:52.627 "bdev_delay_create", 00:05:52.627 "bdev_delay_update_latency", 00:05:52.627 "bdev_zone_block_delete", 00:05:52.627 "bdev_zone_block_create", 00:05:52.627 "blobfs_create", 00:05:52.627 "blobfs_detect", 00:05:52.627 "blobfs_set_cache_size", 00:05:52.627 "bdev_aio_delete", 00:05:52.627 "bdev_aio_rescan", 00:05:52.627 "bdev_aio_create", 00:05:52.627 "bdev_ftl_set_property", 00:05:52.627 "bdev_ftl_get_properties", 00:05:52.627 "bdev_ftl_get_stats", 00:05:52.627 "bdev_ftl_unmap", 00:05:52.627 "bdev_ftl_unload", 00:05:52.627 "bdev_ftl_delete", 00:05:52.627 "bdev_ftl_load", 00:05:52.627 "bdev_ftl_create", 00:05:52.627 "bdev_virtio_attach_controller", 00:05:52.627 "bdev_virtio_scsi_get_devices", 00:05:52.627 "bdev_virtio_detach_controller", 00:05:52.627 "bdev_virtio_blk_set_hotplug", 00:05:52.627 "bdev_iscsi_delete", 00:05:52.627 "bdev_iscsi_create", 00:05:52.627 "bdev_iscsi_set_options", 00:05:52.627 "accel_error_inject_error", 00:05:52.627 "ioat_scan_accel_module", 00:05:52.627 "dsa_scan_accel_module", 00:05:52.627 "iaa_scan_accel_module", 00:05:52.627 "vfu_virtio_create_fs_endpoint", 00:05:52.627 "vfu_virtio_create_scsi_endpoint", 00:05:52.627 "vfu_virtio_scsi_remove_target", 00:05:52.627 "vfu_virtio_scsi_add_target", 00:05:52.627 "vfu_virtio_create_blk_endpoint", 00:05:52.627 "vfu_virtio_delete_endpoint", 00:05:52.627 "keyring_file_remove_key", 00:05:52.627 "keyring_file_add_key", 00:05:52.627 "keyring_linux_set_options", 00:05:52.627 "fsdev_aio_delete", 00:05:52.627 "fsdev_aio_create", 00:05:52.627 "iscsi_get_histogram", 00:05:52.627 "iscsi_enable_histogram", 00:05:52.627 "iscsi_set_options", 00:05:52.627 "iscsi_get_auth_groups", 00:05:52.627 "iscsi_auth_group_remove_secret", 00:05:52.627 "iscsi_auth_group_add_secret", 00:05:52.627 "iscsi_delete_auth_group", 00:05:52.627 "iscsi_create_auth_group", 00:05:52.627 "iscsi_set_discovery_auth", 00:05:52.627 "iscsi_get_options", 00:05:52.627 "iscsi_target_node_request_logout", 00:05:52.627 "iscsi_target_node_set_redirect", 00:05:52.627 "iscsi_target_node_set_auth", 00:05:52.627 "iscsi_target_node_add_lun", 00:05:52.627 "iscsi_get_stats", 00:05:52.627 "iscsi_get_connections", 00:05:52.627 "iscsi_portal_group_set_auth", 00:05:52.627 "iscsi_start_portal_group", 00:05:52.627 "iscsi_delete_portal_group", 00:05:52.627 "iscsi_create_portal_group", 00:05:52.627 "iscsi_get_portal_groups", 00:05:52.627 "iscsi_delete_target_node", 00:05:52.627 "iscsi_target_node_remove_pg_ig_maps", 00:05:52.627 "iscsi_target_node_add_pg_ig_maps", 00:05:52.627 "iscsi_create_target_node", 00:05:52.627 "iscsi_get_target_nodes", 00:05:52.627 "iscsi_delete_initiator_group", 00:05:52.627 "iscsi_initiator_group_remove_initiators", 00:05:52.627 "iscsi_initiator_group_add_initiators", 00:05:52.627 "iscsi_create_initiator_group", 00:05:52.627 "iscsi_get_initiator_groups", 00:05:52.627 "nvmf_set_crdt", 00:05:52.627 "nvmf_set_config", 00:05:52.627 "nvmf_set_max_subsystems", 00:05:52.627 "nvmf_stop_mdns_prr", 00:05:52.627 "nvmf_publish_mdns_prr", 00:05:52.627 "nvmf_subsystem_get_listeners", 00:05:52.627 
"nvmf_subsystem_get_qpairs", 00:05:52.627 "nvmf_subsystem_get_controllers", 00:05:52.627 "nvmf_get_stats", 00:05:52.627 "nvmf_get_transports", 00:05:52.627 "nvmf_create_transport", 00:05:52.627 "nvmf_get_targets", 00:05:52.627 "nvmf_delete_target", 00:05:52.627 "nvmf_create_target", 00:05:52.627 "nvmf_subsystem_allow_any_host", 00:05:52.627 "nvmf_subsystem_set_keys", 00:05:52.627 "nvmf_subsystem_remove_host", 00:05:52.627 "nvmf_subsystem_add_host", 00:05:52.627 "nvmf_ns_remove_host", 00:05:52.627 "nvmf_ns_add_host", 00:05:52.627 "nvmf_subsystem_remove_ns", 00:05:52.627 "nvmf_subsystem_set_ns_ana_group", 00:05:52.627 "nvmf_subsystem_add_ns", 00:05:52.627 "nvmf_subsystem_listener_set_ana_state", 00:05:52.627 "nvmf_discovery_get_referrals", 00:05:52.627 "nvmf_discovery_remove_referral", 00:05:52.627 "nvmf_discovery_add_referral", 00:05:52.627 "nvmf_subsystem_remove_listener", 00:05:52.627 "nvmf_subsystem_add_listener", 00:05:52.627 "nvmf_delete_subsystem", 00:05:52.627 "nvmf_create_subsystem", 00:05:52.627 "nvmf_get_subsystems", 00:05:52.627 "env_dpdk_get_mem_stats", 00:05:52.627 "nbd_get_disks", 00:05:52.627 "nbd_stop_disk", 00:05:52.627 "nbd_start_disk", 00:05:52.627 "ublk_recover_disk", 00:05:52.627 "ublk_get_disks", 00:05:52.627 "ublk_stop_disk", 00:05:52.627 "ublk_start_disk", 00:05:52.627 "ublk_destroy_target", 00:05:52.627 "ublk_create_target", 00:05:52.627 "virtio_blk_create_transport", 00:05:52.627 "virtio_blk_get_transports", 00:05:52.627 "vhost_controller_set_coalescing", 00:05:52.627 "vhost_get_controllers", 00:05:52.627 "vhost_delete_controller", 00:05:52.627 "vhost_create_blk_controller", 00:05:52.627 "vhost_scsi_controller_remove_target", 00:05:52.627 "vhost_scsi_controller_add_target", 00:05:52.627 "vhost_start_scsi_controller", 00:05:52.627 "vhost_create_scsi_controller", 00:05:52.627 "thread_set_cpumask", 00:05:52.627 "scheduler_set_options", 00:05:52.627 "framework_get_governor", 00:05:52.627 "framework_get_scheduler", 00:05:52.627 "framework_set_scheduler", 00:05:52.627 "framework_get_reactors", 00:05:52.627 "thread_get_io_channels", 00:05:52.627 "thread_get_pollers", 00:05:52.627 "thread_get_stats", 00:05:52.627 "framework_monitor_context_switch", 00:05:52.627 "spdk_kill_instance", 00:05:52.627 "log_enable_timestamps", 00:05:52.627 "log_get_flags", 00:05:52.627 "log_clear_flag", 00:05:52.627 "log_set_flag", 00:05:52.627 "log_get_level", 00:05:52.627 "log_set_level", 00:05:52.627 "log_get_print_level", 00:05:52.627 "log_set_print_level", 00:05:52.627 "framework_enable_cpumask_locks", 00:05:52.627 "framework_disable_cpumask_locks", 00:05:52.627 "framework_wait_init", 00:05:52.627 "framework_start_init", 00:05:52.627 "scsi_get_devices", 00:05:52.628 "bdev_get_histogram", 00:05:52.628 "bdev_enable_histogram", 00:05:52.628 "bdev_set_qos_limit", 00:05:52.628 "bdev_set_qd_sampling_period", 00:05:52.628 "bdev_get_bdevs", 00:05:52.628 "bdev_reset_iostat", 00:05:52.628 "bdev_get_iostat", 00:05:52.628 "bdev_examine", 00:05:52.628 "bdev_wait_for_examine", 00:05:52.628 "bdev_set_options", 00:05:52.628 "accel_get_stats", 00:05:52.628 "accel_set_options", 00:05:52.628 "accel_set_driver", 00:05:52.628 "accel_crypto_key_destroy", 00:05:52.628 "accel_crypto_keys_get", 00:05:52.628 "accel_crypto_key_create", 00:05:52.628 "accel_assign_opc", 00:05:52.628 "accel_get_module_info", 00:05:52.628 "accel_get_opc_assignments", 00:05:52.628 "vmd_rescan", 00:05:52.628 "vmd_remove_device", 00:05:52.628 "vmd_enable", 00:05:52.628 "sock_get_default_impl", 00:05:52.628 "sock_set_default_impl", 
00:05:52.628 "sock_impl_set_options", 00:05:52.628 "sock_impl_get_options", 00:05:52.628 "iobuf_get_stats", 00:05:52.628 "iobuf_set_options", 00:05:52.628 "keyring_get_keys", 00:05:52.628 "vfu_tgt_set_base_path", 00:05:52.628 "framework_get_pci_devices", 00:05:52.628 "framework_get_config", 00:05:52.628 "framework_get_subsystems", 00:05:52.628 "fsdev_set_opts", 00:05:52.628 "fsdev_get_opts", 00:05:52.628 "trace_get_info", 00:05:52.628 "trace_get_tpoint_group_mask", 00:05:52.628 "trace_disable_tpoint_group", 00:05:52.628 "trace_enable_tpoint_group", 00:05:52.628 "trace_clear_tpoint_mask", 00:05:52.628 "trace_set_tpoint_mask", 00:05:52.628 "notify_get_notifications", 00:05:52.628 "notify_get_types", 00:05:52.628 "spdk_get_version", 00:05:52.628 "rpc_get_methods" 00:05:52.628 ] 00:05:52.628 10:24:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:52.628 10:24:40 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.628 10:24:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.628 10:24:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:52.628 10:24:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 253493 00:05:52.628 10:24:41 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 253493 ']' 00:05:52.628 10:24:41 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 253493 00:05:52.628 10:24:41 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:52.628 10:24:41 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:52.628 10:24:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 253493 00:05:52.628 10:24:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:52.628 10:24:41 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:52.628 10:24:41 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 253493' 00:05:52.628 killing process with pid 253493 00:05:52.628 10:24:41 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 253493 00:05:52.628 10:24:41 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 253493 00:05:53.194 00:05:53.194 real 0m1.370s 00:05:53.194 user 0m2.450s 00:05:53.194 sys 0m0.479s 00:05:53.194 10:24:41 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.194 10:24:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.194 ************************************ 00:05:53.194 END TEST spdkcli_tcp 00:05:53.194 ************************************ 00:05:53.194 10:24:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.194 10:24:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:53.194 10:24:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.194 10:24:41 -- common/autotest_common.sh@10 -- # set +x 00:05:53.194 ************************************ 00:05:53.194 START TEST dpdk_mem_utility 00:05:53.194 ************************************ 00:05:53.194 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.194 * Looking for test storage... 
00:05:53.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:53.194 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:53.194 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:53.194 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:53.452 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.452 10:24:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:53.452 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.452 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:53.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.452 --rc genhtml_branch_coverage=1 00:05:53.452 --rc genhtml_function_coverage=1 00:05:53.452 --rc genhtml_legend=1 00:05:53.452 --rc geninfo_all_blocks=1 00:05:53.452 --rc geninfo_unexecuted_blocks=1 00:05:53.452 00:05:53.452 ' 00:05:53.452 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:53.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.452 --rc 
genhtml_branch_coverage=1 00:05:53.452 --rc genhtml_function_coverage=1 00:05:53.453 --rc genhtml_legend=1 00:05:53.453 --rc geninfo_all_blocks=1 00:05:53.453 --rc geninfo_unexecuted_blocks=1 00:05:53.453 00:05:53.453 ' 00:05:53.453 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:53.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.453 --rc genhtml_branch_coverage=1 00:05:53.453 --rc genhtml_function_coverage=1 00:05:53.453 --rc genhtml_legend=1 00:05:53.453 --rc geninfo_all_blocks=1 00:05:53.453 --rc geninfo_unexecuted_blocks=1 00:05:53.453 00:05:53.453 ' 00:05:53.453 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:53.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.453 --rc genhtml_branch_coverage=1 00:05:53.453 --rc genhtml_function_coverage=1 00:05:53.453 --rc genhtml_legend=1 00:05:53.453 --rc geninfo_all_blocks=1 00:05:53.453 --rc geninfo_unexecuted_blocks=1 00:05:53.453 00:05:53.453 ' 00:05:53.453 10:24:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:53.453 10:24:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=253703 00:05:53.453 10:24:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.453 10:24:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 253703 00:05:53.453 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 253703 ']' 00:05:53.453 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.453 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:53.453 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.453 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:53.453 10:24:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.453 [2024-11-15 10:24:41.734786] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:05:53.453 [2024-11-15 10:24:41.734882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253703 ] 00:05:53.453 [2024-11-15 10:24:41.799786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.453 [2024-11-15 10:24:41.857959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.710 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:53.710 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:53.710 10:24:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:53.710 10:24:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:53.711 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.711 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.711 { 00:05:53.711 "filename": "/tmp/spdk_mem_dump.txt" 00:05:53.711 } 00:05:53.711 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.711 10:24:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:53.969 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:53.969 1 heaps totaling size 810.000000 MiB 00:05:53.969 size: 810.000000 MiB heap id: 0 00:05:53.969 end heaps---------- 00:05:53.969 9 mempools totaling size 595.772034 MiB 00:05:53.969 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:53.969 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:53.969 size: 92.545471 MiB name: bdev_io_253703 00:05:53.969 size: 50.003479 MiB name: msgpool_253703 00:05:53.969 size: 36.509338 MiB name: fsdev_io_253703 00:05:53.969 size: 21.763794 MiB name: PDU_Pool 00:05:53.969 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:53.969 size: 4.133484 MiB name: evtpool_253703 00:05:53.969 size: 0.026123 MiB name: Session_Pool 00:05:53.969 end mempools------- 00:05:53.969 6 memzones totaling size 4.142822 MiB 00:05:53.969 size: 1.000366 MiB name: RG_ring_0_253703 00:05:53.969 size: 1.000366 MiB name: RG_ring_1_253703 00:05:53.969 size: 1.000366 MiB name: RG_ring_4_253703 00:05:53.969 size: 1.000366 MiB name: RG_ring_5_253703 00:05:53.969 size: 0.125366 MiB name: RG_ring_2_253703 00:05:53.969 size: 0.015991 MiB name: RG_ring_3_253703 00:05:53.969 end memzones------- 00:05:53.969 10:24:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:53.969 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:53.969 list of free elements. 
size: 10.862488 MiB 00:05:53.969 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:53.969 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:53.969 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:53.969 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:53.969 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:53.969 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:53.969 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:53.969 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:53.969 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:53.969 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:53.969 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:53.969 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:53.969 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:53.969 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:53.969 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:53.969 list of standard malloc elements. size: 199.218628 MiB 00:05:53.969 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:53.969 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:53.969 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:53.969 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:53.969 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:53.969 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:53.970 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:53.970 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:53.970 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:53.970 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:53.970 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:53.970 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:53.970 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:53.970 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:53.970 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:53.970 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:53.970 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:53.970 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:53.970 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:53.970 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:53.970 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:53.970 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:53.970 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:53.970 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:53.970 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:53.970 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:53.970 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:53.970 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:53.970 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:53.970 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:53.970 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:53.970 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:53.970 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:53.970 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:53.970 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:53.970 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:53.970 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:53.970 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:53.970 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:53.970 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:53.970 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:53.970 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:53.970 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:53.970 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:53.970 list of memzone associated elements. size: 599.918884 MiB 00:05:53.970 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:53.970 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:53.970 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:53.970 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:53.970 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:53.970 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_253703_0 00:05:53.970 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:53.970 associated memzone info: size: 48.002930 MiB name: MP_msgpool_253703_0 00:05:53.970 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:53.970 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_253703_0 00:05:53.970 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:53.970 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:53.970 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:53.970 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:53.970 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:53.970 associated memzone info: size: 3.000122 MiB name: MP_evtpool_253703_0 00:05:53.970 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:53.970 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_253703 00:05:53.970 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:53.970 associated memzone info: size: 1.007996 MiB name: MP_evtpool_253703 00:05:53.970 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:53.970 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:53.970 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:53.970 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:53.970 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:53.970 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:53.970 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:53.970 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:53.970 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:53.970 associated memzone info: size: 1.000366 MiB name: RG_ring_0_253703 00:05:53.970 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:53.970 associated memzone info: size: 1.000366 MiB name: RG_ring_1_253703 00:05:53.970 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:53.970 associated memzone info: size: 1.000366 MiB name: RG_ring_4_253703 00:05:53.970 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:53.970 associated memzone info: size: 1.000366 MiB name: RG_ring_5_253703 00:05:53.970 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:53.970 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_253703 00:05:53.970 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:53.970 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_253703 00:05:53.970 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:53.970 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:53.970 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:53.970 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:53.970 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:53.970 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:53.970 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:53.970 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_253703 00:05:53.970 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:53.970 associated memzone info: size: 0.125366 MiB name: RG_ring_2_253703 00:05:53.970 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:53.970 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:53.970 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:53.970 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:53.970 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:53.970 associated memzone info: size: 0.015991 MiB name: RG_ring_3_253703 00:05:53.970 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:53.970 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:53.970 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:53.970 associated memzone info: size: 0.000183 MiB name: MP_msgpool_253703 00:05:53.970 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:53.970 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_253703 00:05:53.970 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:53.970 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_253703 00:05:53.970 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:53.970 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:53.970 10:24:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:53.970 10:24:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 253703 00:05:53.970 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 253703 ']' 00:05:53.970 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 253703 00:05:53.970 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:53.970 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:53.970 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 253703 00:05:53.970 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:53.970 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:53.970 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 253703' 00:05:53.970 killing process with pid 253703 00:05:53.970 10:24:42 dpdk_mem_utility -- 
common/autotest_common.sh@971 -- # kill 253703 00:05:53.970 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 253703 00:05:54.537 00:05:54.537 real 0m1.166s 00:05:54.537 user 0m1.164s 00:05:54.537 sys 0m0.409s 00:05:54.537 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.537 10:24:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.537 ************************************ 00:05:54.537 END TEST dpdk_mem_utility 00:05:54.537 ************************************ 00:05:54.537 10:24:42 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:54.537 10:24:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.537 10:24:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.537 10:24:42 -- common/autotest_common.sh@10 -- # set +x 00:05:54.537 ************************************ 00:05:54.537 START TEST event 00:05:54.537 ************************************ 00:05:54.537 10:24:42 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:54.537 * Looking for test storage... 00:05:54.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:54.537 10:24:42 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:54.537 10:24:42 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:54.537 10:24:42 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:54.537 10:24:42 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:54.537 10:24:42 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.537 10:24:42 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.537 10:24:42 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.537 10:24:42 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.537 10:24:42 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.538 10:24:42 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.538 10:24:42 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.538 10:24:42 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.538 10:24:42 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.538 10:24:42 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.538 10:24:42 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.538 10:24:42 event -- scripts/common.sh@344 -- # case "$op" in 00:05:54.538 10:24:42 event -- scripts/common.sh@345 -- # : 1 00:05:54.538 10:24:42 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.538 10:24:42 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.538 10:24:42 event -- scripts/common.sh@365 -- # decimal 1 00:05:54.538 10:24:42 event -- scripts/common.sh@353 -- # local d=1 00:05:54.538 10:24:42 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.538 10:24:42 event -- scripts/common.sh@355 -- # echo 1 00:05:54.538 10:24:42 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.538 10:24:42 event -- scripts/common.sh@366 -- # decimal 2 00:05:54.538 10:24:42 event -- scripts/common.sh@353 -- # local d=2 00:05:54.538 10:24:42 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.538 10:24:42 event -- scripts/common.sh@355 -- # echo 2 00:05:54.538 10:24:42 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.538 10:24:42 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.538 10:24:42 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.538 10:24:42 event -- scripts/common.sh@368 -- # return 0 00:05:54.538 10:24:42 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.538 10:24:42 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:54.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.538 --rc genhtml_branch_coverage=1 00:05:54.538 --rc genhtml_function_coverage=1 00:05:54.538 --rc genhtml_legend=1 00:05:54.538 --rc geninfo_all_blocks=1 00:05:54.538 --rc geninfo_unexecuted_blocks=1 00:05:54.538 00:05:54.538 ' 00:05:54.538 10:24:42 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:54.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.538 --rc genhtml_branch_coverage=1 00:05:54.538 --rc genhtml_function_coverage=1 00:05:54.538 --rc genhtml_legend=1 00:05:54.538 --rc geninfo_all_blocks=1 00:05:54.538 --rc geninfo_unexecuted_blocks=1 00:05:54.538 00:05:54.538 ' 00:05:54.538 10:24:42 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:54.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.538 --rc genhtml_branch_coverage=1 00:05:54.538 --rc genhtml_function_coverage=1 00:05:54.538 --rc genhtml_legend=1 00:05:54.538 --rc geninfo_all_blocks=1 00:05:54.538 --rc geninfo_unexecuted_blocks=1 00:05:54.538 00:05:54.538 ' 00:05:54.538 10:24:42 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:54.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.538 --rc genhtml_branch_coverage=1 00:05:54.538 --rc genhtml_function_coverage=1 00:05:54.538 --rc genhtml_legend=1 00:05:54.538 --rc geninfo_all_blocks=1 00:05:54.538 --rc geninfo_unexecuted_blocks=1 00:05:54.538 00:05:54.538 ' 00:05:54.538 10:24:42 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:54.538 10:24:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:54.538 10:24:42 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:54.538 10:24:42 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:54.538 10:24:42 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.538 10:24:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.538 ************************************ 00:05:54.538 START TEST event_perf 00:05:54.538 ************************************ 00:05:54.538 10:24:42 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:54.538 Running I/O for 1 seconds...[2024-11-15 10:24:42.940014] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:05:54.538 [2024-11-15 10:24:42.940080] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253899 ] 00:05:54.796 [2024-11-15 10:24:43.008836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.796 [2024-11-15 10:24:43.070779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.796 [2024-11-15 10:24:43.070842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.796 [2024-11-15 10:24:43.070909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.796 [2024-11-15 10:24:43.070912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.730 Running I/O for 1 seconds... 00:05:55.730 lcore 0: 234579 00:05:55.730 lcore 1: 234578 00:05:55.730 lcore 2: 234578 00:05:55.730 lcore 3: 234578 00:05:55.730 done. 00:05:55.730 00:05:55.730 real 0m1.210s 00:05:55.730 user 0m4.126s 00:05:55.730 sys 0m0.080s 00:05:55.730 10:24:44 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.730 10:24:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.730 ************************************ 00:05:55.730 END TEST event_perf 00:05:55.730 ************************************ 00:05:55.730 10:24:44 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:55.730 10:24:44 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:55.730 10:24:44 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.730 10:24:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.730 ************************************ 00:05:55.730 START TEST event_reactor 00:05:55.730 ************************************ 00:05:55.730 10:24:44 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:55.988 [2024-11-15 10:24:44.199937] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:05:55.988 [2024-11-15 10:24:44.200007] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254118 ] 00:05:55.988 [2024-11-15 10:24:44.268829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.989 [2024-11-15 10:24:44.325680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.923 test_start 00:05:56.923 oneshot 00:05:56.923 tick 100 00:05:56.923 tick 100 00:05:56.923 tick 250 00:05:56.923 tick 100 00:05:56.923 tick 100 00:05:56.923 tick 100 00:05:56.923 tick 250 00:05:56.923 tick 500 00:05:56.923 tick 100 00:05:56.923 tick 100 00:05:56.923 tick 250 00:05:56.923 tick 100 00:05:56.923 tick 100 00:05:56.923 test_end 00:05:56.923 00:05:56.923 real 0m1.203s 00:05:56.923 user 0m1.134s 00:05:56.923 sys 0m0.066s 00:05:57.182 10:24:45 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.182 10:24:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:57.182 ************************************ 00:05:57.182 END TEST event_reactor 00:05:57.182 ************************************ 00:05:57.182 10:24:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:57.182 10:24:45 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:57.182 10:24:45 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.182 10:24:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.182 ************************************ 00:05:57.182 START TEST event_reactor_perf 00:05:57.182 ************************************ 00:05:57.182 10:24:45 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:57.182 [2024-11-15 10:24:45.451861] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:05:57.182 [2024-11-15 10:24:45.451923] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254329 ] 00:05:57.182 [2024-11-15 10:24:45.517754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.182 [2024-11-15 10:24:45.575596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.557 test_start 00:05:58.557 test_end 00:05:58.557 Performance: 450096 events per second 00:05:58.557 00:05:58.557 real 0m1.200s 00:05:58.557 user 0m1.131s 00:05:58.557 sys 0m0.065s 00:05:58.557 10:24:46 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.557 10:24:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.557 ************************************ 00:05:58.557 END TEST event_reactor_perf 00:05:58.557 ************************************ 00:05:58.557 10:24:46 event -- event/event.sh@49 -- # uname -s 00:05:58.557 10:24:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:58.557 10:24:46 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:58.557 10:24:46 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.557 10:24:46 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.557 10:24:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.557 ************************************ 00:05:58.557 START TEST event_scheduler 00:05:58.557 ************************************ 00:05:58.557 10:24:46 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:58.557 * Looking for test storage... 
00:05:58.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:58.557 10:24:46 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:58.557 10:24:46 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:58.557 10:24:46 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:58.557 10:24:46 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.557 10:24:46 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:58.557 10:24:46 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.557 10:24:46 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:58.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.557 --rc genhtml_branch_coverage=1 00:05:58.557 --rc genhtml_function_coverage=1 00:05:58.557 --rc genhtml_legend=1 00:05:58.557 --rc geninfo_all_blocks=1 00:05:58.557 --rc geninfo_unexecuted_blocks=1 00:05:58.557 00:05:58.557 ' 00:05:58.557 10:24:46 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:58.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.557 --rc genhtml_branch_coverage=1 00:05:58.557 --rc genhtml_function_coverage=1 00:05:58.557 --rc genhtml_legend=1 00:05:58.557 --rc geninfo_all_blocks=1 00:05:58.557 --rc geninfo_unexecuted_blocks=1 00:05:58.557 00:05:58.557 ' 00:05:58.557 10:24:46 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:58.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.557 --rc genhtml_branch_coverage=1 00:05:58.557 --rc genhtml_function_coverage=1 00:05:58.557 --rc genhtml_legend=1 00:05:58.557 --rc geninfo_all_blocks=1 00:05:58.557 --rc geninfo_unexecuted_blocks=1 00:05:58.557 00:05:58.557 ' 00:05:58.557 10:24:46 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:58.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.557 --rc genhtml_branch_coverage=1 00:05:58.557 --rc genhtml_function_coverage=1 00:05:58.557 --rc genhtml_legend=1 00:05:58.557 --rc geninfo_all_blocks=1 00:05:58.557 --rc geninfo_unexecuted_blocks=1 00:05:58.557 00:05:58.557 ' 00:05:58.557 10:24:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:58.557 10:24:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=254526 00:05:58.558 10:24:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.558 10:24:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:58.558 10:24:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 254526 
00:05:58.558 10:24:46 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 254526 ']' 00:05:58.558 10:24:46 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.558 10:24:46 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:58.558 10:24:46 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.558 10:24:46 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:58.558 10:24:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.558 [2024-11-15 10:24:46.873002] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:05:58.558 [2024-11-15 10:24:46.873093] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254526 ] 00:05:58.558 [2024-11-15 10:24:46.941550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.558 [2024-11-15 10:24:47.002009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.558 [2024-11-15 10:24:47.002069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.558 [2024-11-15 10:24:47.002136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.558 [2024-11-15 10:24:47.002139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.816 10:24:47 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:58.816 10:24:47 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:58.816 10:24:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:58.816 10:24:47 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.816 10:24:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.816 [2024-11-15 10:24:47.098988] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:58.816 [2024-11-15 10:24:47.099013] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:58.816 [2024-11-15 10:24:47.099046] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:58.816 [2024-11-15 10:24:47.099057] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:58.816 [2024-11-15 10:24:47.099067] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:58.816 10:24:47 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.816 10:24:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:58.816 10:24:47 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.816 10:24:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.816 [2024-11-15 10:24:47.198733] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
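(A minimal sketch, not part of the captured output: the scheduler test above configures spdk_tgt through rpc_cmd, and the same sequence can be driven by hand with the stock scripts/rpc.py client. Assumes a standalone spdk_tgt started with --wait-for-rpc and the default /var/tmp/spdk.sock socket; framework_set_scheduler, framework_start_init and framework_get_scheduler all appear in the rpc_get_methods listing earlier in this log.)

./build/bin/spdk_tgt --wait-for-rpc &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the RPC socket to appear
./scripts/rpc.py framework_set_scheduler dynamic      # scheduler must be chosen before subsystem init
./scripts/rpc.py framework_start_init                 # finish init, as framework_start_init does in the test above
./scripts/rpc.py framework_get_scheduler              # confirm the dynamic scheduler is active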
00:05:58.816 10:24:47 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.816 10:24:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:58.816 10:24:47 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.816 10:24:47 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.816 10:24:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.816 ************************************ 00:05:58.816 START TEST scheduler_create_thread 00:05:58.816 ************************************ 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.816 2 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.816 3 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.816 4 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.816 5 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.816 6 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.816 7 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.816 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.075 8 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.075 9 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.075 10 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.075 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.641 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.641 00:05:59.641 real 0m0.591s 00:05:59.641 user 0m0.017s 00:05:59.641 sys 0m0.002s 00:05:59.641 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.641 10:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.641 ************************************ 00:05:59.641 END TEST scheduler_create_thread 00:05:59.641 ************************************ 00:05:59.641 10:24:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:59.641 10:24:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 254526 00:05:59.641 10:24:47 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 254526 ']' 00:05:59.641 10:24:47 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 254526 00:05:59.641 10:24:47 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:59.641 10:24:47 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:59.641 10:24:47 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 254526 00:05:59.641 10:24:47 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:59.641 10:24:47 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:59.641 10:24:47 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 254526' 00:05:59.641 killing process with pid 254526 00:05:59.641 10:24:47 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 254526 00:05:59.641 10:24:47 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 254526 00:05:59.899 [2024-11-15 10:24:48.298812] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:00.160 00:06:00.160 real 0m1.824s 00:06:00.160 user 0m2.461s 00:06:00.160 sys 0m0.341s 00:06:00.160 10:24:48 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.160 10:24:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.160 ************************************ 00:06:00.160 END TEST event_scheduler 00:06:00.160 ************************************ 00:06:00.160 10:24:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:00.160 10:24:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:00.160 10:24:48 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.160 10:24:48 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.160 10:24:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.160 ************************************ 00:06:00.160 START TEST app_repeat 00:06:00.160 ************************************ 00:06:00.160 10:24:48 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=254716 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 254716' 00:06:00.160 Process app_repeat pid: 254716 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:00.160 spdk_app_start Round 0 00:06:00.160 10:24:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 254716 /var/tmp/spdk-nbd.sock 00:06:00.160 10:24:48 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 254716 ']' 00:06:00.160 10:24:48 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.160 10:24:48 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:00.160 10:24:48 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.160 10:24:48 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:00.160 10:24:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.160 [2024-11-15 10:24:48.597649] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:06:00.160 [2024-11-15 10:24:48.597714] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254716 ] 00:06:00.418 [2024-11-15 10:24:48.666506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.418 [2024-11-15 10:24:48.727878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.418 [2024-11-15 10:24:48.727881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.418 10:24:48 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:00.418 10:24:48 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:00.418 10:24:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.677 Malloc0 00:06:00.935 10:24:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.193 Malloc1 00:06:01.193 10:24:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.193 10:24:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.452 /dev/nbd0 00:06:01.452 10:24:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.452 10:24:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.452 1+0 records in 00:06:01.452 1+0 records out 00:06:01.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220816 s, 18.5 MB/s 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:01.452 10:24:49 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:01.452 10:24:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.452 10:24:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.452 10:24:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.711 /dev/nbd1 00:06:01.711 10:24:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.711 10:24:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.711 1+0 records in 00:06:01.711 1+0 records out 00:06:01.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231202 s, 17.7 MB/s 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:01.711 10:24:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:01.711 10:24:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.711 10:24:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.711 
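The nbd_rpc_data_verify setup traced above reduces to a short series of rpc.py calls against the app_repeat instance on /var/tmp/spdk-nbd.sock. A minimal sketch, assuming that app is already listening; the $RPC shorthand is ours, while sizes, names and paths are taken from this run:

  # rpc.py client bound to the app_repeat UNIX socket used in this run.
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  # Two 64 MB malloc bdevs with a 4096-byte block size (returned as Malloc0/Malloc1).
  $RPC bdev_malloc_create 64 4096
  $RPC bdev_malloc_create 64 4096

  # Expose each bdev as an NBD block device, then confirm both are attached.
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1
  $RPC nbd_get_disks

After each nbd_start_disk, the suite's waitfornbd helper simply polls /proc/partitions (and does a 1-block O_DIRECT dd) until the nbdX entry shows up, which is the grep/dd churn visible in the trace.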
10:24:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.711 10:24:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.711 10:24:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.969 { 00:06:01.969 "nbd_device": "/dev/nbd0", 00:06:01.969 "bdev_name": "Malloc0" 00:06:01.969 }, 00:06:01.969 { 00:06:01.969 "nbd_device": "/dev/nbd1", 00:06:01.969 "bdev_name": "Malloc1" 00:06:01.969 } 00:06:01.969 ]' 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.969 { 00:06:01.969 "nbd_device": "/dev/nbd0", 00:06:01.969 "bdev_name": "Malloc0" 00:06:01.969 }, 00:06:01.969 { 00:06:01.969 "nbd_device": "/dev/nbd1", 00:06:01.969 "bdev_name": "Malloc1" 00:06:01.969 } 00:06:01.969 ]' 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.969 /dev/nbd1' 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.969 /dev/nbd1' 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.969 10:24:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.227 256+0 records in 00:06:02.227 256+0 records out 00:06:02.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00522238 s, 201 MB/s 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.227 256+0 records in 00:06:02.227 256+0 records out 00:06:02.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206965 s, 50.7 MB/s 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.227 256+0 records in 00:06:02.227 256+0 records out 00:06:02.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023269 s, 45.1 MB/s 00:06:02.227 10:24:50 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.227 10:24:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.485 10:24:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.485 10:24:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.485 10:24:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.485 10:24:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.485 10:24:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.485 10:24:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.485 10:24:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.485 10:24:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.485 10:24:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.485 10:24:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.744 10:24:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.744 10:24:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.744 10:24:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.744 10:24:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.744 10:24:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:02.744 10:24:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.744 10:24:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.744 10:24:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.744 10:24:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.744 10:24:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.744 10:24:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.002 10:24:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.003 10:24:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.003 10:24:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.003 10:24:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.003 10:24:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.003 10:24:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.003 10:24:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.003 10:24:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.003 10:24:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.003 10:24:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.003 10:24:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.003 10:24:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.003 10:24:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.262 10:24:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.520 [2024-11-15 10:24:51.925825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.520 [2024-11-15 10:24:51.979799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.520 [2024-11-15 10:24:51.979803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.778 [2024-11-15 10:24:52.034937] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.778 [2024-11-15 10:24:52.035003] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.304 10:24:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.304 10:24:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:06.304 spdk_app_start Round 1 00:06:06.304 10:24:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 254716 /var/tmp/spdk-nbd.sock 00:06:06.304 10:24:54 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 254716 ']' 00:06:06.304 10:24:54 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.304 10:24:54 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:06.304 10:24:54 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
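Each app_repeat round then write-verifies both NBD devices with dd/cmp and tears the instance down, as in the Round 0 trace above; Rounds 1 and 2 below repeat the same cycle. A sketch of that verify/teardown half, reusing the $RPC shorthand from the previous sketch (the loop and the TMP variable are ours; block counts, flags and paths are this run's):

  TMP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest

  # 1 MiB of random data, pushed through both NBD devices with O_DIRECT.
  dd if=/dev/urandom of=$TMP bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$TMP of=$nbd bs=4096 count=256 oflag=direct
  done

  # Read back the first 1M of each device and compare it byte-for-byte
  # against the source file, then drop the temp file.
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M $TMP $nbd
  done
  rm $TMP

  # Detach the devices and stop this round's app instance.
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC spdk_kill_instance SIGTERM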
00:06:06.304 10:24:54 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:06.304 10:24:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.563 10:24:54 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:06.563 10:24:54 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:06.563 10:24:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.822 Malloc0 00:06:06.822 10:24:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.080 Malloc1 00:06:07.337 10:24:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.337 10:24:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.595 /dev/nbd0 00:06:07.595 10:24:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.595 10:24:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:07.595 1+0 records in 00:06:07.595 1+0 records out 00:06:07.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147515 s, 27.8 MB/s 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:07.595 10:24:55 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:07.595 10:24:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.595 10:24:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.595 10:24:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.852 /dev/nbd1 00:06:07.852 10:24:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.852 10:24:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.852 1+0 records in 00:06:07.852 1+0 records out 00:06:07.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243421 s, 16.8 MB/s 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:07.852 10:24:56 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:07.852 10:24:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.852 10:24:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.852 10:24:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.852 10:24:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.852 10:24:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:08.111 { 00:06:08.111 "nbd_device": "/dev/nbd0", 00:06:08.111 "bdev_name": "Malloc0" 00:06:08.111 }, 00:06:08.111 { 00:06:08.111 "nbd_device": "/dev/nbd1", 00:06:08.111 "bdev_name": "Malloc1" 00:06:08.111 } 00:06:08.111 ]' 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.111 { 00:06:08.111 "nbd_device": "/dev/nbd0", 00:06:08.111 "bdev_name": "Malloc0" 00:06:08.111 }, 00:06:08.111 { 00:06:08.111 "nbd_device": "/dev/nbd1", 00:06:08.111 "bdev_name": "Malloc1" 00:06:08.111 } 00:06:08.111 ]' 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.111 /dev/nbd1' 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.111 /dev/nbd1' 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.111 256+0 records in 00:06:08.111 256+0 records out 00:06:08.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433419 s, 242 MB/s 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.111 256+0 records in 00:06:08.111 256+0 records out 00:06:08.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215882 s, 48.6 MB/s 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.111 256+0 records in 00:06:08.111 256+0 records out 00:06:08.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226049 s, 46.4 MB/s 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.111 10:24:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.678 10:24:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.678 10:24:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.678 10:24:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.678 10:24:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.678 10:24:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.678 10:24:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.678 10:24:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.678 10:24:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.678 10:24:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.678 10:24:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.936 10:24:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.936 10:24:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.936 10:24:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.936 10:24:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.936 10:24:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.936 10:24:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.936 10:24:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.936 10:24:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.936 10:24:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.936 10:24:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.936 10:24:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.195 10:24:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.195 10:24:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.195 10:24:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.195 10:24:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.195 10:24:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.195 10:24:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.195 10:24:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.195 10:24:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.195 10:24:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.195 10:24:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.195 10:24:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.195 10:24:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.195 10:24:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.453 10:24:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.712 [2024-11-15 10:24:58.007897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.712 [2024-11-15 10:24:58.062991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.712 [2024-11-15 10:24:58.062991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.712 [2024-11-15 10:24:58.117281] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.712 [2024-11-15 10:24:58.117377] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.993 10:25:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:12.993 10:25:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:12.993 spdk_app_start Round 2 00:06:12.993 10:25:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 254716 /var/tmp/spdk-nbd.sock 00:06:12.993 10:25:00 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 254716 ']' 00:06:12.993 10:25:00 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.993 10:25:00 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:12.993 10:25:00 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:12.993 10:25:00 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:12.993 10:25:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.993 10:25:01 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:12.993 10:25:01 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:12.993 10:25:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.993 Malloc0 00:06:12.993 10:25:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.252 Malloc1 00:06:13.252 10:25:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.252 10:25:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.509 /dev/nbd0 00:06:13.509 10:25:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.509 10:25:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:13.509 1+0 records in 00:06:13.509 1+0 records out 00:06:13.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282574 s, 14.5 MB/s 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:13.509 10:25:01 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:13.509 10:25:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.509 10:25:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.509 10:25:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.074 /dev/nbd1 00:06:14.074 10:25:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.074 10:25:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.074 1+0 records in 00:06:14.074 1+0 records out 00:06:14.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237748 s, 17.2 MB/s 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:14.074 10:25:02 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:14.074 10:25:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.074 10:25:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.075 10:25:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.075 10:25:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.075 10:25:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:14.333 { 00:06:14.333 "nbd_device": "/dev/nbd0", 00:06:14.333 "bdev_name": "Malloc0" 00:06:14.333 }, 00:06:14.333 { 00:06:14.333 "nbd_device": "/dev/nbd1", 00:06:14.333 "bdev_name": "Malloc1" 00:06:14.333 } 00:06:14.333 ]' 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.333 { 00:06:14.333 "nbd_device": "/dev/nbd0", 00:06:14.333 "bdev_name": "Malloc0" 00:06:14.333 }, 00:06:14.333 { 00:06:14.333 "nbd_device": "/dev/nbd1", 00:06:14.333 "bdev_name": "Malloc1" 00:06:14.333 } 00:06:14.333 ]' 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.333 /dev/nbd1' 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.333 /dev/nbd1' 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.333 10:25:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.333 256+0 records in 00:06:14.333 256+0 records out 00:06:14.333 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00478562 s, 219 MB/s 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.334 256+0 records in 00:06:14.334 256+0 records out 00:06:14.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204751 s, 51.2 MB/s 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.334 256+0 records in 00:06:14.334 256+0 records out 00:06:14.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222796 s, 47.1 MB/s 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.334 10:25:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.592 10:25:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.592 10:25:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.592 10:25:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.592 10:25:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.592 10:25:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.592 10:25:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.592 10:25:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.592 10:25:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.592 10:25:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.592 10:25:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.849 10:25:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.849 10:25:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.849 10:25:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.849 10:25:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.849 10:25:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.849 10:25:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.849 10:25:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.849 10:25:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.849 10:25:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.849 10:25:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.849 10:25:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.107 10:25:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.107 10:25:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.107 10:25:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.365 10:25:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.365 10:25:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.365 10:25:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.365 10:25:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.365 10:25:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.365 10:25:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.365 10:25:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.366 10:25:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.366 10:25:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.366 10:25:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.624 10:25:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.882 [2024-11-15 10:25:04.105577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.882 [2024-11-15 10:25:04.160319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.882 [2024-11-15 10:25:04.160323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.882 [2024-11-15 10:25:04.218195] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.882 [2024-11-15 10:25:04.218270] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.172 10:25:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 254716 /var/tmp/spdk-nbd.sock 00:06:19.172 10:25:06 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 254716 ']' 00:06:19.172 10:25:06 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.172 10:25:06 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.172 10:25:06 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:19.172 10:25:06 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.172 10:25:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:19.172 10:25:07 event.app_repeat -- event/event.sh@39 -- # killprocess 254716 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 254716 ']' 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 254716 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 254716 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 254716' 00:06:19.172 killing process with pid 254716 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@971 -- # kill 254716 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@976 -- # wait 254716 00:06:19.172 spdk_app_start is called in Round 0. 00:06:19.172 Shutdown signal received, stop current app iteration 00:06:19.172 Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 reinitialization... 00:06:19.172 spdk_app_start is called in Round 1. 00:06:19.172 Shutdown signal received, stop current app iteration 00:06:19.172 Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 reinitialization... 00:06:19.172 spdk_app_start is called in Round 2. 00:06:19.172 Shutdown signal received, stop current app iteration 00:06:19.172 Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 reinitialization... 00:06:19.172 spdk_app_start is called in Round 3. 
00:06:19.172 Shutdown signal received, stop current app iteration 00:06:19.172 10:25:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:19.172 10:25:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:19.172 00:06:19.172 real 0m18.807s 00:06:19.172 user 0m41.516s 00:06:19.172 sys 0m3.317s 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.172 10:25:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.172 ************************************ 00:06:19.172 END TEST app_repeat 00:06:19.172 ************************************ 00:06:19.172 10:25:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:19.172 10:25:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:19.172 10:25:07 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:19.172 10:25:07 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.172 10:25:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.172 ************************************ 00:06:19.172 START TEST cpu_locks 00:06:19.172 ************************************ 00:06:19.172 10:25:07 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:19.172 * Looking for test storage... 00:06:19.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:19.172 10:25:07 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:19.172 10:25:07 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:19.172 10:25:07 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:19.172 10:25:07 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:19.172 10:25:07 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.172 10:25:07 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.172 10:25:07 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.173 10:25:07 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:19.173 10:25:07 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.173 10:25:07 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:19.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.173 --rc genhtml_branch_coverage=1 00:06:19.173 --rc genhtml_function_coverage=1 00:06:19.173 --rc genhtml_legend=1 00:06:19.173 --rc geninfo_all_blocks=1 00:06:19.173 --rc geninfo_unexecuted_blocks=1 00:06:19.173 00:06:19.173 ' 00:06:19.173 10:25:07 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:19.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.173 --rc genhtml_branch_coverage=1 00:06:19.173 --rc genhtml_function_coverage=1 00:06:19.173 --rc genhtml_legend=1 00:06:19.173 --rc geninfo_all_blocks=1 00:06:19.173 --rc geninfo_unexecuted_blocks=1 00:06:19.173 00:06:19.173 ' 00:06:19.173 10:25:07 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:19.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.173 --rc genhtml_branch_coverage=1 00:06:19.173 --rc genhtml_function_coverage=1 00:06:19.173 --rc genhtml_legend=1 00:06:19.173 --rc geninfo_all_blocks=1 00:06:19.173 --rc geninfo_unexecuted_blocks=1 00:06:19.173 00:06:19.173 ' 00:06:19.173 10:25:07 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:19.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.173 --rc genhtml_branch_coverage=1 00:06:19.173 --rc genhtml_function_coverage=1 00:06:19.173 --rc genhtml_legend=1 00:06:19.173 --rc geninfo_all_blocks=1 00:06:19.173 --rc geninfo_unexecuted_blocks=1 00:06:19.173 00:06:19.173 ' 00:06:19.173 10:25:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:19.173 10:25:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:19.173 10:25:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:19.173 10:25:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:19.173 10:25:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:19.173 10:25:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.173 10:25:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.173 ************************************ 
00:06:19.173 START TEST default_locks 00:06:19.173 ************************************ 00:06:19.173 10:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:19.173 10:25:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=257213 00:06:19.173 10:25:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.173 10:25:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 257213 00:06:19.173 10:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 257213 ']' 00:06:19.173 10:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.173 10:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.173 10:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.173 10:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.173 10:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.431 [2024-11-15 10:25:07.660961] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:19.431 [2024-11-15 10:25:07.661052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257213 ] 00:06:19.431 [2024-11-15 10:25:07.730044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.431 [2024-11-15 10:25:07.785402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.689 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.689 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:19.689 10:25:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 257213 00:06:19.689 10:25:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 257213 00:06:19.689 10:25:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.947 lslocks: write error 00:06:19.947 10:25:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 257213 00:06:19.947 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 257213 ']' 00:06:19.947 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 257213 00:06:19.947 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:19.947 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:19.947 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 257213 00:06:19.947 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:19.947 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:19.947 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 257213' 
00:06:19.947 killing process with pid 257213 00:06:19.947 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 257213 00:06:19.947 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 257213 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 257213 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 257213 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 257213 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 257213 ']' 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
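The default_locks teardown traced above reduces to two checks that recur throughout this suite: lslocks confirms the target still owns its spdk_cpu_lock file, and the killprocess helper only signals the PID if it is still alive and is not sudo. A minimal sketch of that sequence, with an illustrative PID and without the real autotest_common.sh helpers:

  pid=257213                                   # illustrative; taken from the trace above
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "process $pid holds an spdk_cpu_lock file"
  fi
  if kill -0 "$pid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true          # wait only applies if spdk_tgt was started by this shell
  fi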
00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (257213) - No such process 00:06:20.514 ERROR: process (pid: 257213) is no longer running 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:20.514 00:06:20.514 real 0m1.173s 00:06:20.514 user 0m1.120s 00:06:20.514 sys 0m0.513s 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.514 10:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.514 ************************************ 00:06:20.514 END TEST default_locks 00:06:20.514 ************************************ 00:06:20.514 10:25:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:20.514 10:25:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.514 10:25:08 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.514 10:25:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.514 ************************************ 00:06:20.514 START TEST default_locks_via_rpc 00:06:20.514 ************************************ 00:06:20.515 10:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:20.515 10:25:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=257446 00:06:20.515 10:25:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.515 10:25:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 257446 00:06:20.515 10:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 257446 ']' 00:06:20.515 10:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.515 10:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:20.515 10:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:20.515 10:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:20.515 10:25:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.515 [2024-11-15 10:25:08.874443] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:20.515 [2024-11-15 10:25:08.874540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257446 ] 00:06:20.515 [2024-11-15 10:25:08.941873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.773 [2024-11-15 10:25:09.003479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 257446 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 257446 00:06:21.031 10:25:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.290 10:25:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 257446 00:06:21.290 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 257446 ']' 00:06:21.290 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 257446 00:06:21.290 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:21.290 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:21.290 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 257446 00:06:21.290 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:21.290 10:25:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:21.290 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 257446' 00:06:21.290 killing process with pid 257446 00:06:21.290 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 257446 00:06:21.290 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 257446 00:06:21.549 00:06:21.549 real 0m1.180s 00:06:21.549 user 0m1.142s 00:06:21.549 sys 0m0.508s 00:06:21.549 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.549 10:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.549 ************************************ 00:06:21.549 END TEST default_locks_via_rpc 00:06:21.549 ************************************ 00:06:21.808 10:25:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:21.808 10:25:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:21.808 10:25:10 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.808 10:25:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.808 ************************************ 00:06:21.808 START TEST non_locking_app_on_locked_coremask 00:06:21.808 ************************************ 00:06:21.808 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:21.808 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=257652 00:06:21.808 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.808 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 257652 /var/tmp/spdk.sock 00:06:21.808 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 257652 ']' 00:06:21.808 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.808 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.808 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.808 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.808 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.808 [2024-11-15 10:25:10.105201] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
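The default_locks_via_rpc run above drives the same lock bookkeeping through RPC instead of process exit. Assuming rpc_cmd is the usual thin wrapper around SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock (an assumption, not shown in this log), the two calls reduce to:

  # run from the SPDK repo root against the default /var/tmp/spdk.sock
  ./scripts/rpc.py framework_disable_cpumask_locks   # drop the per-core lock files while the app keeps running
  ./scripts/rpc.py framework_enable_cpumask_locks    # re-claim them; fails if another process grabbed a core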
00:06:21.808 [2024-11-15 10:25:10.105289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257652 ] 00:06:21.808 [2024-11-15 10:25:10.171435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.808 [2024-11-15 10:25:10.228963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.067 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:22.067 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:22.067 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=257661 00:06:22.067 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:22.067 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 257661 /var/tmp/spdk2.sock 00:06:22.067 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 257661 ']' 00:06:22.067 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.067 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:22.067 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.067 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:22.067 10:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.325 [2024-11-15 10:25:10.563525] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:22.325 [2024-11-15 10:25:10.563609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257661 ] 00:06:22.325 [2024-11-15 10:25:10.668731] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
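What non_locking_app_on_locked_coremask exercises is visible in the two spdk_tgt command lines above: the first instance claims core 0, the second reuses the same core but opts out of lock claiming and talks on a second RPC socket. A compressed sketch (paths relative to the SPDK checkout, sockets as in the trace; the lock-file name for core 0 is inferred from the /var/tmp/spdk_cpu_lock_* pattern used later in this suite):

  ./build/bin/spdk_tgt -m 0x1 &                         # claims core 0, presumably via /var/tmp/spdk_cpu_lock_000
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
      -r /var/tmp/spdk2.sock &                          # same core, no claim ("CPU core locks deactivated")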
00:06:22.325 [2024-11-15 10:25:10.668767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.325 [2024-11-15 10:25:10.787532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.259 10:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:23.259 10:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:23.259 10:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 257652 00:06:23.259 10:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 257652 00:06:23.259 10:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.824 lslocks: write error 00:06:23.824 10:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 257652 00:06:23.824 10:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 257652 ']' 00:06:23.824 10:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 257652 00:06:23.824 10:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:23.824 10:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:23.824 10:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 257652 00:06:23.824 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:23.824 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:23.824 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 257652' 00:06:23.824 killing process with pid 257652 00:06:23.824 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 257652 00:06:23.824 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 257652 00:06:24.388 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 257661 00:06:24.388 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 257661 ']' 00:06:24.388 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 257661 00:06:24.389 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:24.389 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:24.389 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 257661 00:06:24.646 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:24.646 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:24.646 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 257661' 00:06:24.646 killing 
process with pid 257661 00:06:24.646 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 257661 00:06:24.646 10:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 257661 00:06:24.904 00:06:24.904 real 0m3.214s 00:06:24.904 user 0m3.429s 00:06:24.904 sys 0m1.052s 00:06:24.904 10:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:24.904 10:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.904 ************************************ 00:06:24.904 END TEST non_locking_app_on_locked_coremask 00:06:24.904 ************************************ 00:06:24.904 10:25:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:24.904 10:25:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:24.904 10:25:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:24.904 10:25:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.904 ************************************ 00:06:24.904 START TEST locking_app_on_unlocked_coremask 00:06:24.904 ************************************ 00:06:24.904 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:24.904 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=258028 00:06:24.904 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:24.904 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 258028 /var/tmp/spdk.sock 00:06:24.904 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 258028 ']' 00:06:24.904 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.904 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:24.904 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.904 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:24.904 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.162 [2024-11-15 10:25:13.370617] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:25.163 [2024-11-15 10:25:13.370699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258028 ] 00:06:25.163 [2024-11-15 10:25:13.434482] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:25.163 [2024-11-15 10:25:13.434511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.163 [2024-11-15 10:25:13.488171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.420 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:25.420 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:25.420 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=258097 00:06:25.420 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:25.420 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 258097 /var/tmp/spdk2.sock 00:06:25.420 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 258097 ']' 00:06:25.420 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.420 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.421 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.421 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.421 10:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.421 [2024-11-15 10:25:13.806023] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
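Both instances above are gated on waitforlisten before any RPC is issued. The real helper appears to live in autotest_common.sh and is not reproduced in this log; a minimal stand-in that only polls for the RPC Unix-domain socket (borrowing the max_retries=100 idea seen in the trace) would look like:

  wait_for_sock() {
      local sock=${1:-/var/tmp/spdk.sock} retries=100
      while (( retries-- > 0 )); do
          [ -S "$sock" ] && return 0                    # socket exists, target is (probably) listening
          sleep 0.1
      done
      echo "timed out waiting for $sock" >&2
      return 1
  }
  wait_for_sock /var/tmp/spdk2.sock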
00:06:25.421 [2024-11-15 10:25:13.806109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258097 ] 00:06:25.678 [2024-11-15 10:25:13.903973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.678 [2024-11-15 10:25:14.015398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.611 10:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:26.611 10:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:26.611 10:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 258097 00:06:26.611 10:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 258097 00:06:26.611 10:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.177 lslocks: write error 00:06:27.177 10:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 258028 00:06:27.177 10:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 258028 ']' 00:06:27.177 10:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 258028 00:06:27.177 10:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:27.177 10:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:27.177 10:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 258028 00:06:27.177 10:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:27.177 10:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:27.177 10:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 258028' 00:06:27.177 killing process with pid 258028 00:06:27.177 10:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 258028 00:06:27.177 10:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 258028 00:06:27.743 10:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 258097 00:06:27.743 10:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 258097 ']' 00:06:27.743 10:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 258097 00:06:27.743 10:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:27.743 10:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:27.743 10:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 258097 00:06:28.001 10:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:28.001 10:25:16 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:28.001 10:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 258097' 00:06:28.001 killing process with pid 258097 00:06:28.001 10:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 258097 00:06:28.001 10:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 258097 00:06:28.260 00:06:28.260 real 0m3.311s 00:06:28.260 user 0m3.580s 00:06:28.260 sys 0m1.025s 00:06:28.260 10:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:28.260 10:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.260 ************************************ 00:06:28.260 END TEST locking_app_on_unlocked_coremask 00:06:28.260 ************************************ 00:06:28.260 10:25:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:28.260 10:25:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:28.260 10:25:16 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.260 10:25:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.260 ************************************ 00:06:28.260 START TEST locking_app_on_locked_coremask 00:06:28.260 ************************************ 00:06:28.260 10:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:28.260 10:25:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=258421 00:06:28.260 10:25:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.260 10:25:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 258421 /var/tmp/spdk.sock 00:06:28.260 10:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 258421 ']' 00:06:28.260 10:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.260 10:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:28.260 10:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.260 10:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:28.261 10:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.519 [2024-11-15 10:25:16.734073] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:06:28.519 [2024-11-15 10:25:16.734150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258421 ] 00:06:28.519 [2024-11-15 10:25:16.802841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.519 [2024-11-15 10:25:16.858416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=258531 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 258531 /var/tmp/spdk2.sock 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 258531 /var/tmp/spdk2.sock 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 258531 /var/tmp/spdk2.sock 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 258531 ']' 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:28.778 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.778 [2024-11-15 10:25:17.171848] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:06:28.778 [2024-11-15 10:25:17.171930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258531 ] 00:06:29.035 [2024-11-15 10:25:17.269844] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 258421 has claimed it. 00:06:29.035 [2024-11-15 10:25:17.269912] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (258531) - No such process 00:06:29.602 ERROR: process (pid: 258531) is no longer running 00:06:29.602 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:29.602 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:29.602 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:29.602 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.602 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.602 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.602 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 258421 00:06:29.602 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 258421 00:06:29.602 10:25:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.860 lslocks: write error 00:06:29.860 10:25:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 258421 00:06:29.860 10:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 258421 ']' 00:06:29.860 10:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 258421 00:06:29.860 10:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:29.860 10:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:29.860 10:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 258421 00:06:30.118 10:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:30.118 10:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:30.118 10:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 258421' 00:06:30.118 killing process with pid 258421 00:06:30.118 10:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 258421 00:06:30.118 10:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 258421 00:06:30.377 00:06:30.377 real 0m2.084s 00:06:30.377 user 0m2.295s 00:06:30.377 sys 0m0.656s 00:06:30.377 10:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:30.377 
10:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.377 ************************************ 00:06:30.377 END TEST locking_app_on_locked_coremask 00:06:30.377 ************************************ 00:06:30.377 10:25:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:30.377 10:25:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:30.377 10:25:18 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.377 10:25:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.377 ************************************ 00:06:30.377 START TEST locking_overlapped_coremask 00:06:30.377 ************************************ 00:06:30.377 10:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:30.377 10:25:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=258709 00:06:30.377 10:25:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:30.377 10:25:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 258709 /var/tmp/spdk.sock 00:06:30.377 10:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 258709 ']' 00:06:30.377 10:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.377 10:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.377 10:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.377 10:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.377 10:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.636 [2024-11-15 10:25:18.874902] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
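locking_app_on_locked_coremask, which finished just above, asserts the opposite outcome: with core 0 already claimed and --disable-cpumask-locks absent, the second spdk_tgt must log "Cannot create lock on core 0 ..." and exit non-zero, which the NOT wrapper turns into a pass (es=1). Stripped of the wrapper, the expectation is simply:

  if ! ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "second instance refused to start, as expected: core 0 is already locked"
  fi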
00:06:30.636 [2024-11-15 10:25:18.875005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258709 ] 00:06:30.636 [2024-11-15 10:25:18.943797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.636 [2024-11-15 10:25:19.005935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.636 [2024-11-15 10:25:19.006000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.636 [2024-11-15 10:25:19.006003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=258829 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 258829 /var/tmp/spdk2.sock 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 258829 /var/tmp/spdk2.sock 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 258829 /var/tmp/spdk2.sock 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 258829 ']' 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.894 10:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.894 [2024-11-15 10:25:19.337535] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:06:30.894 [2024-11-15 10:25:19.337618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258829 ] 00:06:31.152 [2024-11-15 10:25:19.442052] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 258709 has claimed it. 00:06:31.152 [2024-11-15 10:25:19.442112] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:31.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (258829) - No such process 00:06:31.719 ERROR: process (pid: 258829) is no longer running 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 258709 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 258709 ']' 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 258709 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 258709 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 258709' 00:06:31.719 killing process with pid 258709 00:06:31.719 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 258709 00:06:31.719 10:25:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 258709 00:06:32.285 00:06:32.285 real 0m1.692s 00:06:32.285 user 0m4.698s 00:06:32.285 sys 0m0.477s 00:06:32.285 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:32.285 10:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.285 ************************************ 00:06:32.285 END TEST locking_overlapped_coremask 00:06:32.285 ************************************ 00:06:32.285 10:25:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:32.285 10:25:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:32.285 10:25:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:32.285 10:25:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.285 ************************************ 00:06:32.285 START TEST locking_overlapped_coremask_via_rpc 00:06:32.285 ************************************ 00:06:32.285 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:32.285 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=258993 00:06:32.285 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:32.285 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 258993 /var/tmp/spdk.sock 00:06:32.285 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 258993 ']' 00:06:32.285 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.285 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:32.285 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.285 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:32.285 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.285 [2024-11-15 10:25:20.615100] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:32.285 [2024-11-15 10:25:20.615189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258993 ] 00:06:32.286 [2024-11-15 10:25:20.680389] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.286 [2024-11-15 10:25:20.680420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.286 [2024-11-15 10:25:20.736061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.286 [2024-11-15 10:25:20.736167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.286 [2024-11-15 10:25:20.736176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.544 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:32.544 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:32.544 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=259012 00:06:32.544 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:32.544 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 259012 /var/tmp/spdk2.sock 00:06:32.544 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 259012 ']' 00:06:32.544 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.544 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:32.544 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.544 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:32.544 10:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.802 [2024-11-15 10:25:21.050451] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:32.802 [2024-11-15 10:25:21.050531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259012 ] 00:06:32.802 [2024-11-15 10:25:21.153165] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.802 [2024-11-15 10:25:21.153211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.060 [2024-11-15 10:25:21.280732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.060 [2024-11-15 10:25:21.284423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:33.060 [2024-11-15 10:25:21.284426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.627 [2024-11-15 10:25:22.067455] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 258993 has claimed it. 
00:06:33.627 request: 00:06:33.627 { 00:06:33.627 "method": "framework_enable_cpumask_locks", 00:06:33.627 "req_id": 1 00:06:33.627 } 00:06:33.627 Got JSON-RPC error response 00:06:33.627 response: 00:06:33.627 { 00:06:33.627 "code": -32603, 00:06:33.627 "message": "Failed to claim CPU core: 2" 00:06:33.627 } 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 258993 /var/tmp/spdk.sock 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 258993 ']' 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:33.627 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.886 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.886 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:33.886 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 259012 /var/tmp/spdk2.sock 00:06:33.886 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 259012 ']' 00:06:33.886 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.886 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:33.886 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
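The request/response pair above is the expected failure: the first target (mask 0x7) already holds the lock on core 2, and the second target (mask 0x1c, started with --disable-cpumask-locks) is then asked to claim its cores over RPC. A hedged reproduction of the two calls with the rpc.py used elsewhere in this log, against the socket paths of this run:

    # first target, default socket /var/tmp/spdk.sock: claims locks on cores 0-2
    scripts/rpc.py framework_enable_cpumask_locks
    # second target shares core 2 with the first, so this is expected to fail
    # with JSON-RPC error -32603 "Failed to claim CPU core: 2"
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks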
00:06:33.886 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:33.886 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.452 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.452 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:34.452 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.452 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.452 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.452 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.452 00:06:34.452 real 0m2.070s 00:06:34.452 user 0m1.154s 00:06:34.452 sys 0m0.170s 00:06:34.452 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.452 10:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.452 ************************************ 00:06:34.452 END TEST locking_overlapped_coremask_via_rpc 00:06:34.452 ************************************ 00:06:34.452 10:25:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.452 10:25:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 258993 ]] 00:06:34.452 10:25:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 258993 00:06:34.452 10:25:22 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 258993 ']' 00:06:34.452 10:25:22 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 258993 00:06:34.452 10:25:22 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:34.453 10:25:22 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:34.453 10:25:22 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 258993 00:06:34.453 10:25:22 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:34.453 10:25:22 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:34.453 10:25:22 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 258993' 00:06:34.453 killing process with pid 258993 00:06:34.453 10:25:22 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 258993 00:06:34.453 10:25:22 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 258993 00:06:34.711 10:25:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 259012 ]] 00:06:34.711 10:25:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 259012 00:06:34.711 10:25:23 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 259012 ']' 00:06:34.711 10:25:23 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 259012 00:06:34.711 10:25:23 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:34.711 10:25:23 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:06:34.711 10:25:23 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 259012 00:06:34.711 10:25:23 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:34.711 10:25:23 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:34.711 10:25:23 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 259012' 00:06:34.711 killing process with pid 259012 00:06:34.711 10:25:23 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 259012 00:06:34.711 10:25:23 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 259012 00:06:35.276 10:25:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.276 10:25:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:35.276 10:25:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 258993 ]] 00:06:35.276 10:25:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 258993 00:06:35.276 10:25:23 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 258993 ']' 00:06:35.276 10:25:23 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 258993 00:06:35.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (258993) - No such process 00:06:35.276 10:25:23 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 258993 is not found' 00:06:35.276 Process with pid 258993 is not found 00:06:35.276 10:25:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 259012 ]] 00:06:35.276 10:25:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 259012 00:06:35.276 10:25:23 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 259012 ']' 00:06:35.276 10:25:23 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 259012 00:06:35.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (259012) - No such process 00:06:35.276 10:25:23 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 259012 is not found' 00:06:35.276 Process with pid 259012 is not found 00:06:35.276 10:25:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.276 00:06:35.276 real 0m16.154s 00:06:35.276 user 0m29.268s 00:06:35.276 sys 0m5.335s 00:06:35.276 10:25:23 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.276 10:25:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.276 ************************************ 00:06:35.276 END TEST cpu_locks 00:06:35.276 ************************************ 00:06:35.276 00:06:35.276 real 0m40.856s 00:06:35.276 user 1m19.868s 00:06:35.276 sys 0m9.453s 00:06:35.276 10:25:23 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.276 10:25:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.276 ************************************ 00:06:35.276 END TEST event 00:06:35.276 ************************************ 00:06:35.276 10:25:23 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:35.276 10:25:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:35.277 10:25:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.277 10:25:23 -- common/autotest_common.sh@10 -- # set +x 00:06:35.277 ************************************ 00:06:35.277 START TEST thread 00:06:35.277 ************************************ 00:06:35.277 10:25:23 thread -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:35.277 * Looking for test storage... 00:06:35.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:35.277 10:25:23 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:35.277 10:25:23 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:35.277 10:25:23 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:35.536 10:25:23 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:35.536 10:25:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.536 10:25:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.536 10:25:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.536 10:25:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.536 10:25:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.536 10:25:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.536 10:25:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.536 10:25:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.536 10:25:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.536 10:25:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.536 10:25:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.536 10:25:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:35.536 10:25:23 thread -- scripts/common.sh@345 -- # : 1 00:06:35.536 10:25:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.536 10:25:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.536 10:25:23 thread -- scripts/common.sh@365 -- # decimal 1 00:06:35.536 10:25:23 thread -- scripts/common.sh@353 -- # local d=1 00:06:35.536 10:25:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.536 10:25:23 thread -- scripts/common.sh@355 -- # echo 1 00:06:35.536 10:25:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.536 10:25:23 thread -- scripts/common.sh@366 -- # decimal 2 00:06:35.536 10:25:23 thread -- scripts/common.sh@353 -- # local d=2 00:06:35.536 10:25:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.536 10:25:23 thread -- scripts/common.sh@355 -- # echo 2 00:06:35.536 10:25:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.536 10:25:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.536 10:25:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.536 10:25:23 thread -- scripts/common.sh@368 -- # return 0 00:06:35.536 10:25:23 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.536 10:25:23 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:35.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.536 --rc genhtml_branch_coverage=1 00:06:35.536 --rc genhtml_function_coverage=1 00:06:35.536 --rc genhtml_legend=1 00:06:35.536 --rc geninfo_all_blocks=1 00:06:35.536 --rc geninfo_unexecuted_blocks=1 00:06:35.536 00:06:35.536 ' 00:06:35.536 10:25:23 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:35.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.536 --rc genhtml_branch_coverage=1 00:06:35.536 --rc genhtml_function_coverage=1 00:06:35.536 --rc genhtml_legend=1 00:06:35.536 --rc geninfo_all_blocks=1 00:06:35.536 --rc geninfo_unexecuted_blocks=1 00:06:35.536 00:06:35.536 ' 00:06:35.536 10:25:23 thread 
-- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:35.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.536 --rc genhtml_branch_coverage=1 00:06:35.536 --rc genhtml_function_coverage=1 00:06:35.536 --rc genhtml_legend=1 00:06:35.536 --rc geninfo_all_blocks=1 00:06:35.536 --rc geninfo_unexecuted_blocks=1 00:06:35.536 00:06:35.536 ' 00:06:35.536 10:25:23 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:35.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.536 --rc genhtml_branch_coverage=1 00:06:35.536 --rc genhtml_function_coverage=1 00:06:35.536 --rc genhtml_legend=1 00:06:35.536 --rc geninfo_all_blocks=1 00:06:35.536 --rc geninfo_unexecuted_blocks=1 00:06:35.536 00:06:35.536 ' 00:06:35.536 10:25:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.536 10:25:23 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:35.536 10:25:23 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.536 10:25:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.536 ************************************ 00:06:35.536 START TEST thread_poller_perf 00:06:35.536 ************************************ 00:06:35.536 10:25:23 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.536 [2024-11-15 10:25:23.845224] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:35.536 [2024-11-15 10:25:23.845287] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259506 ] 00:06:35.536 [2024-11-15 10:25:23.908261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.536 [2024-11-15 10:25:23.963276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.536 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:36.919 [2024-11-15T09:25:25.382Z] ====================================== 00:06:36.919 [2024-11-15T09:25:25.382Z] busy:2710767711 (cyc) 00:06:36.919 [2024-11-15T09:25:25.382Z] total_run_count: 365000 00:06:36.919 [2024-11-15T09:25:25.382Z] tsc_hz: 2700000000 (cyc) 00:06:36.919 [2024-11-15T09:25:25.382Z] ====================================== 00:06:36.919 [2024-11-15T09:25:25.382Z] poller_cost: 7426 (cyc), 2750 (nsec) 00:06:36.919 00:06:36.919 real 0m1.202s 00:06:36.919 user 0m1.132s 00:06:36.919 sys 0m0.065s 00:06:36.919 10:25:25 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.919 10:25:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.919 ************************************ 00:06:36.919 END TEST thread_poller_perf 00:06:36.919 ************************************ 00:06:36.919 10:25:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.919 10:25:25 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:36.919 10:25:25 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.919 10:25:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.919 ************************************ 00:06:36.919 START TEST thread_poller_perf 00:06:36.919 ************************************ 00:06:36.919 10:25:25 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.919 [2024-11-15 10:25:25.094136] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:36.919 [2024-11-15 10:25:25.094195] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259659 ] 00:06:36.919 [2024-11-15 10:25:25.157955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.920 [2024-11-15 10:25:25.215329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.920 Running 1000 pollers for 1 seconds with 0 microseconds period. 
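The poller_cost printed by poller_perf is consistent with busy cycles divided by executions, converted to nanoseconds at the advertised TSC rate: 2710767711 cyc / 365000 runs ≈ 7426 cyc per poll, and 7426 cyc / 2.7 cyc per nsec ≈ 2750 nsec, matching the summary above (the 0-period run below works out the same way: 2702555976 / 4850000 ≈ 557 cyc ≈ 206 nsec). A quick shell check of the arithmetic:

    echo $(( 2710767711 / 365000 ))   # 7426 cycles per poller execution
    echo $(( 7426 * 1000 / 2700 ))    # 2750 nsec at tsc_hz 2700000000 (2.7 cycles per nsec)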
00:06:37.854 [2024-11-15T09:25:26.317Z] ====================================== 00:06:37.854 [2024-11-15T09:25:26.317Z] busy:2702555976 (cyc) 00:06:37.854 [2024-11-15T09:25:26.317Z] total_run_count: 4850000 00:06:37.854 [2024-11-15T09:25:26.317Z] tsc_hz: 2700000000 (cyc) 00:06:37.854 [2024-11-15T09:25:26.317Z] ====================================== 00:06:37.854 [2024-11-15T09:25:26.317Z] poller_cost: 557 (cyc), 206 (nsec) 00:06:37.854 00:06:37.854 real 0m1.197s 00:06:37.854 user 0m1.132s 00:06:37.854 sys 0m0.060s 00:06:37.854 10:25:26 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:37.854 10:25:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.854 ************************************ 00:06:37.854 END TEST thread_poller_perf 00:06:37.854 ************************************ 00:06:37.854 10:25:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:37.854 00:06:37.854 real 0m2.643s 00:06:37.854 user 0m2.404s 00:06:37.854 sys 0m0.245s 00:06:37.854 10:25:26 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:37.854 10:25:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.854 ************************************ 00:06:37.854 END TEST thread 00:06:37.854 ************************************ 00:06:38.113 10:25:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:38.113 10:25:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:38.113 10:25:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:38.113 10:25:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.113 10:25:26 -- common/autotest_common.sh@10 -- # set +x 00:06:38.113 ************************************ 00:06:38.113 START TEST app_cmdline 00:06:38.113 ************************************ 00:06:38.113 10:25:26 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:38.113 * Looking for test storage... 
00:06:38.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:38.113 10:25:26 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:38.113 10:25:26 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:38.113 10:25:26 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:38.113 10:25:26 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.113 10:25:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.114 10:25:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:38.114 10:25:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:38.114 10:25:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.114 10:25:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:38.114 10:25:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.114 10:25:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:38.114 10:25:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:38.114 10:25:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.114 10:25:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:38.114 10:25:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.114 10:25:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.114 10:25:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.114 10:25:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:38.114 10:25:26 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.114 10:25:26 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:38.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.114 --rc genhtml_branch_coverage=1 00:06:38.114 --rc genhtml_function_coverage=1 00:06:38.114 --rc genhtml_legend=1 00:06:38.114 --rc geninfo_all_blocks=1 00:06:38.114 --rc geninfo_unexecuted_blocks=1 00:06:38.114 00:06:38.114 ' 00:06:38.114 10:25:26 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:38.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.114 --rc genhtml_branch_coverage=1 00:06:38.114 --rc genhtml_function_coverage=1 00:06:38.114 --rc genhtml_legend=1 00:06:38.114 --rc geninfo_all_blocks=1 00:06:38.114 --rc geninfo_unexecuted_blocks=1 
00:06:38.114 00:06:38.114 ' 00:06:38.114 10:25:26 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:38.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.114 --rc genhtml_branch_coverage=1 00:06:38.114 --rc genhtml_function_coverage=1 00:06:38.114 --rc genhtml_legend=1 00:06:38.114 --rc geninfo_all_blocks=1 00:06:38.114 --rc geninfo_unexecuted_blocks=1 00:06:38.114 00:06:38.114 ' 00:06:38.114 10:25:26 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:38.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.114 --rc genhtml_branch_coverage=1 00:06:38.114 --rc genhtml_function_coverage=1 00:06:38.114 --rc genhtml_legend=1 00:06:38.114 --rc geninfo_all_blocks=1 00:06:38.114 --rc geninfo_unexecuted_blocks=1 00:06:38.114 00:06:38.114 ' 00:06:38.114 10:25:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:38.114 10:25:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=259864 00:06:38.114 10:25:26 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:38.114 10:25:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 259864 00:06:38.114 10:25:26 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 259864 ']' 00:06:38.114 10:25:26 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.114 10:25:26 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:38.114 10:25:26 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.114 10:25:26 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:38.114 10:25:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.114 [2024-11-15 10:25:26.556308] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:06:38.114 [2024-11-15 10:25:26.556425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259864 ] 00:06:38.372 [2024-11-15 10:25:26.622104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.372 [2024-11-15 10:25:26.679341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.630 10:25:26 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:38.630 10:25:26 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:38.630 10:25:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:38.888 { 00:06:38.888 "version": "SPDK v25.01-pre git sha1 318515b44", 00:06:38.888 "fields": { 00:06:38.888 "major": 25, 00:06:38.888 "minor": 1, 00:06:38.888 "patch": 0, 00:06:38.888 "suffix": "-pre", 00:06:38.888 "commit": "318515b44" 00:06:38.888 } 00:06:38.888 } 00:06:38.888 10:25:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:38.888 10:25:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:38.888 10:25:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:38.888 10:25:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:38.888 10:25:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.888 10:25:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:38.888 10:25:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.888 10:25:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:38.888 10:25:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:38.888 10:25:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:38.888 10:25:27 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.146 request: 00:06:39.146 { 00:06:39.146 "method": "env_dpdk_get_mem_stats", 00:06:39.146 "req_id": 1 00:06:39.146 } 00:06:39.146 Got JSON-RPC error response 00:06:39.146 response: 00:06:39.146 { 00:06:39.146 "code": -32601, 00:06:39.146 "message": "Method not found" 00:06:39.146 } 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.146 10:25:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 259864 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 259864 ']' 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 259864 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 259864 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 259864' 00:06:39.146 killing process with pid 259864 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@971 -- # kill 259864 00:06:39.146 10:25:27 app_cmdline -- common/autotest_common.sh@976 -- # wait 259864 00:06:39.713 00:06:39.713 real 0m1.609s 00:06:39.713 user 0m1.986s 00:06:39.713 sys 0m0.477s 00:06:39.713 10:25:27 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.713 10:25:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.713 ************************************ 00:06:39.713 END TEST app_cmdline 00:06:39.713 ************************************ 00:06:39.713 10:25:27 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:39.713 10:25:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:39.713 10:25:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.713 10:25:27 -- common/autotest_common.sh@10 -- # set +x 00:06:39.713 ************************************ 00:06:39.713 START TEST version 00:06:39.713 ************************************ 00:06:39.713 10:25:28 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:39.713 * Looking for test storage... 
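The -32601 response above is the point of cmdline.sh: this target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that allowlist is rejected before it is dispatched. Roughly the same probes, issued by hand through the traced rpc.py against the default socket of this run:

    # on the allowlist: returns the version JSON and the method list checked above
    scripts/rpc.py spdk_get_version
    scripts/rpc.py rpc_get_methods
    # not on the allowlist: expected to fail with -32601 "Method not found"
    scripts/rpc.py env_dpdk_get_mem_stats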
00:06:39.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:39.713 10:25:28 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:39.713 10:25:28 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:39.713 10:25:28 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:39.713 10:25:28 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:39.713 10:25:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.713 10:25:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.713 10:25:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.713 10:25:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.713 10:25:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.713 10:25:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.713 10:25:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.713 10:25:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.713 10:25:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.713 10:25:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.713 10:25:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.713 10:25:28 version -- scripts/common.sh@344 -- # case "$op" in 00:06:39.713 10:25:28 version -- scripts/common.sh@345 -- # : 1 00:06:39.713 10:25:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.713 10:25:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.713 10:25:28 version -- scripts/common.sh@365 -- # decimal 1 00:06:39.713 10:25:28 version -- scripts/common.sh@353 -- # local d=1 00:06:39.713 10:25:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.713 10:25:28 version -- scripts/common.sh@355 -- # echo 1 00:06:39.713 10:25:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.713 10:25:28 version -- scripts/common.sh@366 -- # decimal 2 00:06:39.713 10:25:28 version -- scripts/common.sh@353 -- # local d=2 00:06:39.713 10:25:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.713 10:25:28 version -- scripts/common.sh@355 -- # echo 2 00:06:39.713 10:25:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.713 10:25:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.713 10:25:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.713 10:25:28 version -- scripts/common.sh@368 -- # return 0 00:06:39.713 10:25:28 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.713 10:25:28 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:39.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.713 --rc genhtml_branch_coverage=1 00:06:39.713 --rc genhtml_function_coverage=1 00:06:39.713 --rc genhtml_legend=1 00:06:39.713 --rc geninfo_all_blocks=1 00:06:39.713 --rc geninfo_unexecuted_blocks=1 00:06:39.713 00:06:39.713 ' 00:06:39.713 10:25:28 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:39.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.713 --rc genhtml_branch_coverage=1 00:06:39.713 --rc genhtml_function_coverage=1 00:06:39.713 --rc genhtml_legend=1 00:06:39.713 --rc geninfo_all_blocks=1 00:06:39.713 --rc geninfo_unexecuted_blocks=1 00:06:39.713 00:06:39.713 ' 00:06:39.713 10:25:28 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:39.713 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.713 --rc genhtml_branch_coverage=1 00:06:39.713 --rc genhtml_function_coverage=1 00:06:39.713 --rc genhtml_legend=1 00:06:39.713 --rc geninfo_all_blocks=1 00:06:39.713 --rc geninfo_unexecuted_blocks=1 00:06:39.713 00:06:39.713 ' 00:06:39.713 10:25:28 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:39.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.713 --rc genhtml_branch_coverage=1 00:06:39.713 --rc genhtml_function_coverage=1 00:06:39.713 --rc genhtml_legend=1 00:06:39.713 --rc geninfo_all_blocks=1 00:06:39.713 --rc geninfo_unexecuted_blocks=1 00:06:39.713 00:06:39.713 ' 00:06:39.713 10:25:28 version -- app/version.sh@17 -- # get_header_version major 00:06:39.713 10:25:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:39.713 10:25:28 version -- app/version.sh@14 -- # cut -f2 00:06:39.713 10:25:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.713 10:25:28 version -- app/version.sh@17 -- # major=25 00:06:39.713 10:25:28 version -- app/version.sh@18 -- # get_header_version minor 00:06:39.713 10:25:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:39.713 10:25:28 version -- app/version.sh@14 -- # cut -f2 00:06:39.714 10:25:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.714 10:25:28 version -- app/version.sh@18 -- # minor=1 00:06:39.714 10:25:28 version -- app/version.sh@19 -- # get_header_version patch 00:06:39.714 10:25:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:39.714 10:25:28 version -- app/version.sh@14 -- # cut -f2 00:06:39.714 10:25:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.714 10:25:28 version -- app/version.sh@19 -- # patch=0 00:06:39.972 10:25:28 version -- app/version.sh@20 -- # get_header_version suffix 00:06:39.972 10:25:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:39.972 10:25:28 version -- app/version.sh@14 -- # cut -f2 00:06:39.972 10:25:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.972 10:25:28 version -- app/version.sh@20 -- # suffix=-pre 00:06:39.972 10:25:28 version -- app/version.sh@22 -- # version=25.1 00:06:39.972 10:25:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:39.972 10:25:28 version -- app/version.sh@28 -- # version=25.1rc0 00:06:39.972 10:25:28 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:39.972 10:25:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:39.972 10:25:28 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:39.972 10:25:28 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:39.972 00:06:39.972 real 0m0.205s 00:06:39.972 user 0m0.133s 00:06:39.972 sys 0m0.097s 00:06:39.972 10:25:28 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.972 
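version.sh builds the expected version string by grepping the SPDK_VERSION_* defines out of include/spdk/version.h and then compares it with what the installed Python bindings report. A condensed sketch of the pattern visible in the trace (the grep/cut/tr plumbing follows the trace; the real helper in test/app/version.sh may assemble the pattern differently):

    get_header_version() {   # e.g. get_header_version MAJOR -> 25
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
    }
    version="$(get_header_version MAJOR).$(get_header_version MINOR)"   # 25.1; the -pre suffix maps to rc0 in this run
    python3 -c 'import spdk; print(spdk.__version__)'                   # 25.1rc0, compared against 25.1rc0 above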
10:25:28 version -- common/autotest_common.sh@10 -- # set +x 00:06:39.972 ************************************ 00:06:39.972 END TEST version 00:06:39.972 ************************************ 00:06:39.972 10:25:28 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:39.972 10:25:28 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:39.972 10:25:28 -- spdk/autotest.sh@194 -- # uname -s 00:06:39.972 10:25:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:39.972 10:25:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:39.972 10:25:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:39.972 10:25:28 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:39.972 10:25:28 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:39.972 10:25:28 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:39.972 10:25:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:39.972 10:25:28 -- common/autotest_common.sh@10 -- # set +x 00:06:39.972 10:25:28 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:39.972 10:25:28 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:39.972 10:25:28 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:39.972 10:25:28 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:39.972 10:25:28 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:39.972 10:25:28 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:39.972 10:25:28 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:39.972 10:25:28 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:39.972 10:25:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.972 10:25:28 -- common/autotest_common.sh@10 -- # set +x 00:06:39.972 ************************************ 00:06:39.972 START TEST nvmf_tcp 00:06:39.972 ************************************ 00:06:39.972 10:25:28 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:39.972 * Looking for test storage... 
00:06:39.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:39.972 10:25:28 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:39.972 10:25:28 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:39.972 10:25:28 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:39.972 10:25:28 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:39.972 10:25:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:39.973 10:25:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.973 10:25:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:39.973 10:25:28 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.973 10:25:28 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:39.973 10:25:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:39.973 10:25:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.973 10:25:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:40.232 10:25:28 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.232 10:25:28 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.232 10:25:28 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.232 10:25:28 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:40.232 10:25:28 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.232 10:25:28 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.232 --rc genhtml_branch_coverage=1 00:06:40.233 --rc genhtml_function_coverage=1 00:06:40.233 --rc genhtml_legend=1 00:06:40.233 --rc geninfo_all_blocks=1 00:06:40.233 --rc geninfo_unexecuted_blocks=1 00:06:40.233 00:06:40.233 ' 00:06:40.233 10:25:28 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.233 --rc genhtml_branch_coverage=1 00:06:40.233 --rc genhtml_function_coverage=1 00:06:40.233 --rc genhtml_legend=1 00:06:40.233 --rc geninfo_all_blocks=1 00:06:40.233 --rc geninfo_unexecuted_blocks=1 00:06:40.233 00:06:40.233 ' 00:06:40.233 10:25:28 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:40.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.233 --rc genhtml_branch_coverage=1 00:06:40.233 --rc genhtml_function_coverage=1 00:06:40.233 --rc genhtml_legend=1 00:06:40.233 --rc geninfo_all_blocks=1 00:06:40.233 --rc geninfo_unexecuted_blocks=1 00:06:40.233 00:06:40.233 ' 00:06:40.233 10:25:28 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.233 --rc genhtml_branch_coverage=1 00:06:40.233 --rc genhtml_function_coverage=1 00:06:40.233 --rc genhtml_legend=1 00:06:40.233 --rc geninfo_all_blocks=1 00:06:40.233 --rc geninfo_unexecuted_blocks=1 00:06:40.233 00:06:40.233 ' 00:06:40.233 10:25:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:40.233 10:25:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:40.233 10:25:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:40.233 10:25:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:40.233 10:25:28 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.233 10:25:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.233 ************************************ 00:06:40.233 START TEST nvmf_target_core 00:06:40.233 ************************************ 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:40.233 * Looking for test storage... 00:06:40.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.233 --rc genhtml_branch_coverage=1 00:06:40.233 --rc genhtml_function_coverage=1 00:06:40.233 --rc genhtml_legend=1 00:06:40.233 --rc geninfo_all_blocks=1 00:06:40.233 --rc geninfo_unexecuted_blocks=1 00:06:40.233 00:06:40.233 ' 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.233 --rc genhtml_branch_coverage=1 00:06:40.233 --rc genhtml_function_coverage=1 00:06:40.233 --rc genhtml_legend=1 00:06:40.233 --rc geninfo_all_blocks=1 00:06:40.233 --rc geninfo_unexecuted_blocks=1 00:06:40.233 00:06:40.233 ' 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:40.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.233 --rc genhtml_branch_coverage=1 00:06:40.233 --rc genhtml_function_coverage=1 00:06:40.233 --rc genhtml_legend=1 00:06:40.233 --rc geninfo_all_blocks=1 00:06:40.233 --rc geninfo_unexecuted_blocks=1 00:06:40.233 00:06:40.233 ' 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.233 --rc genhtml_branch_coverage=1 00:06:40.233 --rc genhtml_function_coverage=1 00:06:40.233 --rc genhtml_legend=1 00:06:40.233 --rc geninfo_all_blocks=1 00:06:40.233 --rc geninfo_unexecuted_blocks=1 00:06:40.233 00:06:40.233 ' 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.233 10:25:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.234 
************************************ 00:06:40.234 START TEST nvmf_abort 00:06:40.234 ************************************ 00:06:40.234 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:40.494 * Looking for test storage... 00:06:40.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.494 --rc genhtml_branch_coverage=1 00:06:40.494 --rc genhtml_function_coverage=1 00:06:40.494 --rc genhtml_legend=1 00:06:40.494 --rc geninfo_all_blocks=1 00:06:40.494 --rc geninfo_unexecuted_blocks=1 00:06:40.494 00:06:40.494 ' 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.494 --rc genhtml_branch_coverage=1 00:06:40.494 --rc genhtml_function_coverage=1 00:06:40.494 --rc genhtml_legend=1 00:06:40.494 --rc geninfo_all_blocks=1 00:06:40.494 --rc geninfo_unexecuted_blocks=1 00:06:40.494 00:06:40.494 ' 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:40.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.494 --rc genhtml_branch_coverage=1 00:06:40.494 --rc genhtml_function_coverage=1 00:06:40.494 --rc genhtml_legend=1 00:06:40.494 --rc geninfo_all_blocks=1 00:06:40.494 --rc geninfo_unexecuted_blocks=1 00:06:40.494 00:06:40.494 ' 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.494 --rc genhtml_branch_coverage=1 00:06:40.494 --rc genhtml_function_coverage=1 00:06:40.494 --rc genhtml_legend=1 00:06:40.494 --rc geninfo_all_blocks=1 00:06:40.494 --rc geninfo_unexecuted_blocks=1 00:06:40.494 00:06:40.494 ' 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.494 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
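The common.sh sourcing traced above gives every nvmf test the same listener port layout (4420/4421/4422) and an initiator identity taken from `nvme gen-hostnqn`. A rough standalone sketch of that environment — the exact way common.sh derives the host ID is not shown in this trace, so the uuid extraction below is illustrative only:

```bash
#!/usr/bin/env bash
# Sketch of the per-test NVMe-oF identity set up by test/nvmf/common.sh (illustrative)
NVMF_PORT=4420                          # primary listener port used by the tests
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumption: host ID is the uuid portion of the NQN
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
echo "host: $NVME_HOSTNQN id: $NVME_HOSTID"
```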
00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:40.495 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:43.031 10:25:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:06:43.031 Found 0000:82:00.0 (0x8086 - 0x159b) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:06:43.031 Found 0000:82:00.1 (0x8086 - 0x159b) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:43.031 10:25:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:06:43.031 Found net devices under 0000:82:00.0: cvl_0_0 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:06:43.031 Found net devices under 0000:82:00.1: cvl_0_1 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:43.031 10:25:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:43.031 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:43.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:43.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:06:43.032 00:06:43.032 --- 10.0.0.2 ping statistics --- 00:06:43.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.032 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:43.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:43.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:06:43.032 00:06:43.032 --- 10.0.0.1 ping statistics --- 00:06:43.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.032 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=261958 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 261958 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 261958 ']' 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:43.032 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.032 [2024-11-15 10:25:31.262276] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
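Before nvmf_tgt is launched above, nvmftestinit moves one e810 port into a private network namespace and wires a 10.0.0.0/24 link between the two ports, then ping-verifies both directions. The same plumbing can be reproduced by hand with standard iproute2 commands; a minimal sketch using the interface names from this run (cvl_0_0, cvl_0_1):

```bash
#!/usr/bin/env bash
# Minimal reproduction of the cvl_0_0/cvl_0_1 loopback plumbing traced above
# (interface names are from this run; substitute your own e810 netdevs)
set -e
ip netns add cvl_0_0_ns_spdk                                   # target side lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns
```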
00:06:43.032 [2024-11-15 10:25:31.262358] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.032 [2024-11-15 10:25:31.338277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.032 [2024-11-15 10:25:31.402449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.032 [2024-11-15 10:25:31.402511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.032 [2024-11-15 10:25:31.402524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.032 [2024-11-15 10:25:31.402536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.032 [2024-11-15 10:25:31.402545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:43.032 [2024-11-15 10:25:31.404056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.032 [2024-11-15 10:25:31.404112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.032 [2024-11-15 10:25:31.404115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.291 [2024-11-15 10:25:31.552974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.291 Malloc0 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.291 Delay0 
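The rpc_cmd calls traced above configure the nvmf_tgt that was just started inside the namespace (pid 261958, RPC socket /var/tmp/spdk.sock). Outside the harness the same configuration can be pushed with scripts/rpc.py; a sketch with the flags copied from the trace (the crude sleep stands in for the harness's waitforlisten):

```bash
#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Launch the target inside the test namespace, flags and core mask as traced above
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
sleep 3                                                       # crude stand-in for waitforlisten on /var/tmp/spdk.sock
# Same RPCs the test issues through rpc_cmd
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192 -a 256
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 4096 -b Malloc0  # 64 MiB malloc bdev, 4096-byte blocks
"$SPDK/scripts/rpc.py" bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000               # large artificial latency so aborts find I/O in flight
```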
00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.291 [2024-11-15 10:25:31.624737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.291 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:43.291 [2024-11-15 10:25:31.731271] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:45.825 Initializing NVMe Controllers 00:06:45.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:45.825 controller IO queue size 128 less than required 00:06:45.825 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:45.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:45.825 Initialization complete. Launching workers. 
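With Delay0 in place, the trace shows the bdev being exported through nqn.2016-06.io.spdk:cnode0 and the bundled abort example driving queued I/O against it. A standalone equivalent, with the addresses, NQN, and flags copied from the trace:

```bash
#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Export Delay0 over NVMe/TCP on the namespace-side address
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Run the abort example against the listener (flags exactly as traced above)
"$SPDK/build/examples/abort" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
```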
00:06:45.825 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30296 00:06:45.826 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30357, failed to submit 62 00:06:45.826 success 30300, unsuccessful 57, failed 0 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:45.826 rmmod nvme_tcp 00:06:45.826 rmmod nvme_fabrics 00:06:45.826 rmmod nvme_keyring 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 261958 ']' 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 261958 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 261958 ']' 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 261958 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 261958 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 261958' 00:06:45.826 killing process with pid 261958 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 261958 00:06:45.826 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 261958 00:06:45.826 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:45.826 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:45.826 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:45.826 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:45.826 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:45.826 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:45.826 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:45.826 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:45.826 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:45.826 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.826 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.826 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.745 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:48.006 00:06:48.006 real 0m7.548s 00:06:48.006 user 0m10.857s 00:06:48.006 sys 0m2.528s 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 ************************************ 00:06:48.006 END TEST nvmf_abort 00:06:48.006 ************************************ 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 ************************************ 00:06:48.006 START TEST nvmf_ns_hotplug_stress 00:06:48.006 ************************************ 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:48.006 * Looking for test storage... 
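The tail of the abort test traced above tears everything back down: the target process is killed, the kernel initiator modules are unloaded, the SPDK-tagged iptables rule is dropped, and the test namespace is removed. A rough standalone equivalent — the namespace-removal commands are an assumption, since the harness's _remove_spdk_ns helper is not expanded in this trace:

```bash
#!/usr/bin/env bash
# Rough equivalent of the nvmftestfini/killprocess teardown traced at the end of nvmf_abort
nvmfpid=261958                                        # pid reported by waitforlisten in this run
kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"     # stop nvmf_tgt if it is still alive
modprobe -v -r nvme-tcp                               # unload the kernel initiator modules
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK-tagged ACCEPT rule
ip -4 addr flush cvl_0_1                              # clear the initiator-side test address
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 netns 1   # assumed: hand the NIC back to the root ns
ip netns delete cvl_0_0_ns_spdk                             # assumed: drop the test namespace
```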
00:06:48.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.006 --rc genhtml_branch_coverage=1 00:06:48.006 --rc genhtml_function_coverage=1 00:06:48.006 --rc genhtml_legend=1 00:06:48.006 --rc geninfo_all_blocks=1 00:06:48.006 --rc geninfo_unexecuted_blocks=1 00:06:48.006 00:06:48.006 ' 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.006 --rc genhtml_branch_coverage=1 00:06:48.006 --rc genhtml_function_coverage=1 00:06:48.006 --rc genhtml_legend=1 00:06:48.006 --rc geninfo_all_blocks=1 00:06:48.006 --rc geninfo_unexecuted_blocks=1 00:06:48.006 00:06:48.006 ' 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.006 --rc genhtml_branch_coverage=1 00:06:48.006 --rc genhtml_function_coverage=1 00:06:48.006 --rc genhtml_legend=1 00:06:48.006 --rc geninfo_all_blocks=1 00:06:48.006 --rc geninfo_unexecuted_blocks=1 00:06:48.006 00:06:48.006 ' 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.006 --rc genhtml_branch_coverage=1 00:06:48.006 --rc genhtml_function_coverage=1 00:06:48.006 --rc genhtml_legend=1 00:06:48.006 --rc geninfo_all_blocks=1 00:06:48.006 --rc geninfo_unexecuted_blocks=1 00:06:48.006 00:06:48.006 ' 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.006 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:48.007 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:50.542 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:06:50.543 Found 0000:82:00.0 (0x8086 - 0x159b) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.543 
10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:06:50.543 Found 0000:82:00.1 (0x8086 - 0x159b) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:06:50.543 Found net devices under 0000:82:00.0: cvl_0_0 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:06:50.543 Found net devices under 0000:82:00.1: cvl_0_1 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:50.543 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:50.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:50.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:06:50.544 00:06:50.544 --- 10.0.0.2 ping statistics --- 00:06:50.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.544 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:06:50.544 00:06:50.544 --- 10.0.0.1 ping statistics --- 00:06:50.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.544 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=264315 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 264315 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
264315 ']' 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.544 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:50.544 [2024-11-15 10:25:38.922890] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:50.544 [2024-11-15 10:25:38.922962] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.544 [2024-11-15 10:25:38.994073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.802 [2024-11-15 10:25:39.055629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.802 [2024-11-15 10:25:39.055693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.802 [2024-11-15 10:25:39.055721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.802 [2024-11-15 10:25:39.055733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.802 [2024-11-15 10:25:39.055742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
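The namespace plumbing traced above (nvmf_tcp_init in nvmf/common.sh) reduces to a short, repeatable sequence: flush both E810 ports, move the target-side port into a private network namespace, address both ends, open TCP port 4420, and ping in both directions before nvmf_tgt is launched inside that namespace. A condensed sketch of that sequence, using only the device names, addresses, and iptables tagging convention that appear in the logged commands (nothing outside the trace is assumed):

    # Sketch reconstructed from the nvmf_tcp_init trace; cvl_0_0 is the target-side
    # port and cvl_0_1 the initiator-side port, exactly as logged above.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side (NVMF_INITIATOR_IP)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (NVMF_FIRST_TARGET_IP)
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Admit NVMe/TCP traffic on the listener port, tagged so it can be cleaned up later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Reachability check in both directions before the target starts inside the namespace.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1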
00:06:50.802 [2024-11-15 10:25:39.060385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.802 [2024-11-15 10:25:39.060454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.802 [2024-11-15 10:25:39.060458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.802 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:50.802 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:06:50.802 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:50.802 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:50.802 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:50.802 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.802 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:50.802 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:51.060 [2024-11-15 10:25:39.469168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.060 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:51.318 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.577 [2024-11-15 10:25:40.007906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.577 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:51.834 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:52.399 Malloc0 00:06:52.399 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:52.399 Delay0 00:06:52.399 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.967 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:52.967 NULL1 00:06:52.967 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:53.225 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=264624 00:06:53.225 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:53.225 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:06:53.225 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.599 Read completed with error (sct=0, sc=11) 00:06:54.599 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.857 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:54.857 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:55.116 true 00:06:55.116 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:06:55.116 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.052 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.052 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:56.052 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:56.310 true 00:06:56.310 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:06:56.310 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.567 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.133 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:57.133 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:57.133 true 00:06:57.133 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:06:57.133 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.390 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.648 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:57.648 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:57.906 true 00:06:58.164 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:06:58.164 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.097 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.354 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:59.354 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:59.611 true 00:06:59.611 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:06:59.611 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.868 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.124 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:00.124 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:00.381 true 00:07:00.381 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:00.381 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.638 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.894 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:00.894 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:01.151 true 00:07:01.151 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:01.151 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.084 10:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.649 10:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:02.649 10:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:02.649 true 00:07:02.649 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:02.649 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.907 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.472 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:03.472 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:03.472 true 00:07:03.472 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:03.472 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.730 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.296 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:04.296 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:04.296 true 00:07:04.296 10:25:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:04.296 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.230 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.487 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:05.487 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:05.745 true 00:07:05.745 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:05.745 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.004 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.261 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:06.261 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:06.826 true 00:07:06.826 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:06.826 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.392 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.649 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:07.649 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:07.907 true 00:07:07.907 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:07.907 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.165 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.422 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:08.422 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:08.680 true 00:07:08.680 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:08.680 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.613 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.871 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:09.871 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:10.129 true 00:07:10.129 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:10.129 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.387 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.644 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:10.644 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:10.902 true 00:07:10.902 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:10.902 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.835 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.093 
10:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:12.093 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:12.350 true 00:07:12.350 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:12.350 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.608 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.865 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:12.865 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:13.123 true 00:07:13.123 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:13.123 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.055 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.314 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:14.314 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:14.571 true 00:07:14.571 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:14.571 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.830 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.088 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:15.088 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:15.345 true 00:07:15.345 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:15.345 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.603 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.168 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:16.168 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:16.168 true 00:07:16.168 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:16.168 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.541 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.541 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:17.541 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:17.798 true 00:07:17.798 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:17.798 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.057 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.314 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:18.315 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:18.572 true 00:07:18.572 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:18.572 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.830 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.087 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:19.087 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:19.345 true 00:07:19.602 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:19.603 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.536 10:26:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.794 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:20.794 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:20.794 true 00:07:21.051 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:21.051 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.308 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.567 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:21.567 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:21.912 true 00:07:21.912 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:21.912 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.845 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.845 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:22.845 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:23.103 true 00:07:23.103 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:23.103 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.362 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.619 Initializing NVMe Controllers 00:07:23.619 Attached to NVMe 
over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:23.619 Controller IO queue size 128, less than required. 00:07:23.619 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:23.619 Controller IO queue size 128, less than required. 00:07:23.619 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:23.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:23.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:23.620 Initialization complete. Launching workers. 00:07:23.620 ======================================================== 00:07:23.620 Latency(us) 00:07:23.620 Device Information : IOPS MiB/s Average min max 00:07:23.620 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 830.30 0.41 70521.60 2501.72 1065682.97 00:07:23.620 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9636.45 4.71 13284.06 2910.89 450701.11 00:07:23.620 ======================================================== 00:07:23.620 Total : 10466.75 5.11 17824.55 2501.72 1065682.97 00:07:23.620 00:07:23.620 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:23.620 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:23.877 true 00:07:24.135 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 264624 00:07:24.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (264624) - No such process 00:07:24.135 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 264624 00:07:24.135 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.393 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.650 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:24.650 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:24.651 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:24.651 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.651 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:24.908 null0 00:07:24.909 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.909 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.909 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:25.166 null1 00:07:25.166 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.166 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.166 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:25.423 null2 00:07:25.423 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.423 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.423 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:25.681 null3 00:07:25.681 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.681 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.681 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:25.939 null4 00:07:25.939 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.939 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.939 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:26.196 null5 00:07:26.196 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:26.196 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:26.196 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:26.454 null6 00:07:26.454 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:26.454 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:26.454 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:26.714 null7 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
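
Note: the latency summary printed a short way above reports the two namespaces separately; the Total row is the IOPS-weighted combination of the per-namespace averages. A quick sanity check of that arithmetic (illustrative only, not part of the test run):

    awk 'BEGIN {
        i1 = 830.30;  a1 = 70521.60    # NSID 1: IOPS, average latency (us)
        i2 = 9636.45; a2 = 13284.06    # NSID 2: IOPS, average latency (us)
        t  = i1 + i2
        printf "total IOPS %.2f, weighted average %.2f us\n", t, (i1*a1 + i2*a2)/t
    }'
    # prints: total IOPS 10466.75, weighted average 17824.56 us
    # which matches the Total row (10466.75 IOPS, 17824.55 us) up to rounding
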
00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
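
Note: the namespace add/remove churn traced here is produced by the add_remove helper executed at ns_hotplug_stress.sh lines 14 and 16-18 in the xtrace above. A minimal sketch reconstructed from those trace entries (rpc_py is shorthand introduced for this sketch; the actual script may differ in detail):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {
        local nsid=$1 bdev=$2
        # repeatedly hot-plug the given null bdev as namespace $nsid, then remove it
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
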
00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 268702 268703 268705 268707 268709 268711 268713 268715 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.714 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.973 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.973 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.973 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.973 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.973 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.973 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.973 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.973 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.231 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.232 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.489 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.490 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.490 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.490 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.490 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.490 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.490 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.747 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.006 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.007 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.007 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.007 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.007 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.264 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.264 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.265 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.265 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.265 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.265 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.265 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.265 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.523 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.781 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.781 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.781 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.781 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.781 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.781 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.781 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.781 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
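
Note: the interleaved rounds of nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns above and below come from eight such workers running in parallel, one per null bdev. The trace at ns_hotplug_stress.sh lines 58 through 66 (nthreads=8, pids+=($!), and the wait on PIDs 268702 ... 268715) suggests a fan-out along these lines, under the same caveat as the sketch above:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # one background worker per namespace / null bdev
        pids+=($!)
    done
    wait "${pids[@]}"                      # e.g. PIDs 268702 268703 268705 ... 268715 in the run above
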
00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.040 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.299 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:29.299 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:29.299 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.557 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:29.557 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:29.557 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.557 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:29.557 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.815 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.815 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.815 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.815 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.815 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.816 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:30.074 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:30.074 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:30.074 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:30.074 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.074 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:30.074 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:30.074 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:30.074 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.333 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:30.592 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.592 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:30.593 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:30.593 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:30.593 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:30.593 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:30.593 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:30.593 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.851 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.852 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:31.110 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:31.368 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.368 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.368 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:31.368 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:31.368 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:31.368 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.368 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:31.626 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.626 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.626 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:31.626 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.626 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.626 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:31.626 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.626 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.627 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:31.885 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:31.885 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:31.885 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.885 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:31.885 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.885 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:31.885 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.885 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.144 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:32.403 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:32.403 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:32.403 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.403 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:32.403 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:32.403 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.403 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:32.403 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:32.661 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.661 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.661 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.661 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.661 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.661 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.661 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.661 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.661 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.661 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.661 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.661 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.661 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.662 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.662 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.662 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.662 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:32.662 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:32.662 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:32.662 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:32.662 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:32.662 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:32.662 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:32.662 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:32.662 rmmod nvme_tcp 00:07:32.920 rmmod nvme_fabrics 00:07:32.920 rmmod nvme_keyring 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 264315 ']' 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 264315 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 264315 ']' 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 264315 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 264315 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 264315' 00:07:32.920 killing process with pid 264315 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 264315 00:07:32.920 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 264315 00:07:33.181 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:33.181 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 
-- # [[ tcp == \t\c\p ]] 00:07:33.181 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:33.181 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:33.181 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:33.181 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:33.181 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:33.181 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:33.181 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:33.181 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.181 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.181 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.092 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:35.092 00:07:35.092 real 0m47.228s 00:07:35.092 user 3m39.672s 00:07:35.092 sys 0m16.363s 00:07:35.092 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:35.092 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:35.092 ************************************ 00:07:35.092 END TEST nvmf_ns_hotplug_stress 00:07:35.092 ************************************ 00:07:35.092 10:26:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:35.092 10:26:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:35.092 10:26:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.092 10:26:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.092 ************************************ 00:07:35.092 START TEST nvmf_delete_subsystem 00:07:35.092 ************************************ 00:07:35.092 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:35.352 * Looking for test storage... 
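The "real/user/sys" summary, the END TEST banner, and the immediately following START TEST banner are emitted by autotest's run_test helper, which brackets and times each sub-script (here delete_subsystem.sh --transport=tcp). A rough sketch of what that wrapper does, under the assumption that it is essentially a timed invocation with banners (the real helper lives in autotest_common.sh and also manages xtrace):

  run_test_sketch() {          # hypothetical name; illustrates the observed START/END/time pattern
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                # e.g. .../test/nvmf/target/delete_subsystem.sh --transport=tcp
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }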
00:07:35.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:35.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.352 --rc genhtml_branch_coverage=1 00:07:35.352 --rc genhtml_function_coverage=1 00:07:35.352 --rc genhtml_legend=1 00:07:35.352 --rc geninfo_all_blocks=1 00:07:35.352 --rc geninfo_unexecuted_blocks=1 00:07:35.352 00:07:35.352 ' 00:07:35.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:35.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.352 --rc genhtml_branch_coverage=1 00:07:35.353 --rc genhtml_function_coverage=1 00:07:35.353 --rc genhtml_legend=1 00:07:35.353 --rc geninfo_all_blocks=1 00:07:35.353 --rc geninfo_unexecuted_blocks=1 00:07:35.353 00:07:35.353 ' 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:35.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.353 --rc genhtml_branch_coverage=1 00:07:35.353 --rc genhtml_function_coverage=1 00:07:35.353 --rc genhtml_legend=1 00:07:35.353 --rc geninfo_all_blocks=1 00:07:35.353 --rc geninfo_unexecuted_blocks=1 00:07:35.353 00:07:35.353 ' 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:35.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.353 --rc genhtml_branch_coverage=1 00:07:35.353 --rc genhtml_function_coverage=1 00:07:35.353 --rc genhtml_legend=1 00:07:35.353 --rc geninfo_all_blocks=1 00:07:35.353 --rc geninfo_unexecuted_blocks=1 00:07:35.353 00:07:35.353 ' 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:35.353 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.889 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.889 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:07:37.890 Found 0000:82:00.0 (0x8086 - 0x159b) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.890 
10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:07:37.890 Found 0000:82:00.1 (0x8086 - 0x159b) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:07:37.890 Found net devices under 0000:82:00.0: cvl_0_0 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:07:37.890 Found net devices under 0000:82:00.1: cvl_0_1 
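The two "Found net devices under 0000:82:00.x" messages are produced by the device-discovery loop in nvmf/common.sh: the E810 functions (0x8086:0x159b) are collected into pci_devs, and for each one the kernel's sysfs net directory is globbed to find the bound netdev (cvl_0_0 and cvl_0_1 here). A condensed sketch of that loop, keeping only the tcp/e810 path exercised above:

  pci_devs=( 0000:82:00.0 0000:82:00.1 )              # the two E810 functions found above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
      pci_net_devs=( "${pci_net_devs[@]##*/}" )       # strip the sysfs path, keep the ifname
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=( "${pci_net_devs[@]}" )
  done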
00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.890 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.891 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.891 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:37.891 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:37.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:07:37.891 00:07:37.891 --- 10.0.0.2 ping statistics --- 00:07:37.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.891 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:07:37.891 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:07:37.891 00:07:37.891 --- 10.0.0.1 ping statistics --- 00:07:37.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.891 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:07:37.891 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.891 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:37.891 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:37.891 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.891 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:37.891 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:37.891 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.891 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:37.891 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=271607 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 271607 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 271607 ']' 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:37.891 10:26:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.891 [2024-11-15 10:26:26.059604] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:07:37.891 [2024-11-15 10:26:26.059692] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.891 [2024-11-15 10:26:26.128484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:37.891 [2024-11-15 10:26:26.181250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.891 [2024-11-15 10:26:26.181313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.891 [2024-11-15 10:26:26.181343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.891 [2024-11-15 10:26:26.181354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.891 [2024-11-15 10:26:26.181372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.891 [2024-11-15 10:26:26.182811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.891 [2024-11-15 10:26:26.182817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.891 [2024-11-15 10:26:26.323921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:37.891 10:26:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.891 [2024-11-15 10:26:26.340119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.891 NULL1 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.891 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.150 Delay0 00:07:38.150 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.150 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.150 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.150 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.150 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.150 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=271649 00:07:38.150 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:38.150 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:38.150 [2024-11-15 10:26:26.425034] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
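Putting the RPCs above together, the delete_subsystem test brings the target up inside the cvl_0_0_ns_spdk namespace, layers a delay bdev over a null bdev so that I/O stays in flight, starts a perf workload against the subsystem, and then deletes the subsystem out from under it. A sketch of that sequence as it appears in the trace (paths shortened; rpc.py assumed to target the running nvmf_tgt):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512               # null backing bdev (size 1000, 512-byte blocks, as traced)
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                 -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # triggers the aborted I/O seen below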
00:07:40.048 10:26:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.048 10:26:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.048 10:26:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 [2024-11-15 10:26:28.549120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa14c000c40 is same with the state(6) to be set 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with 
error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 starting I/O failed: -6 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Read completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.307 Write completed with error (sct=0, sc=8) 00:07:40.308 starting I/O failed: -6 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 
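The flood of "Read/Write completed with error (sct=0, sc=8)" lines is the perf initiator reporting completions after the subsystem, and therefore its queue pairs, disappears mid-workload; sct=0 is the generic command status type, and status code 0x08 in that set is, as far as I recall from the NVMe base specification, "Command Aborted due to SQ Deletion". Treat that mapping as an assumption on my part rather than something stated in this log. A tiny helper for eyeballing such pairs:

  decode_generic_sc() {        # hypothetical helper, covers only the codes relevant here
      case "$1" in
          0) echo "Successful Completion" ;;
          7) echo "Command Abort Requested" ;;
          8) echo "Command Aborted due to SQ Deletion" ;;   # assumed meaning of sc=8 above
          *) echo "other generic status code $1" ;;
      esac
  }
  decode_generic_sc 8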
00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 starting I/O failed: -6 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 starting I/O failed: -6 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 starting I/O failed: -6 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 starting I/O failed: -6 00:07:40.308 [2024-11-15 10:26:28.549950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6774a0 is same with the state(6) to be set 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error 
(sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Write completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:40.308 Read completed with error (sct=0, sc=8) 00:07:41.242 [2024-11-15 10:26:29.519404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6789a0 is same with the state(6) to be set 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 [2024-11-15 10:26:29.549995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa14c00d7e0 is same with the state(6) to be set 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 [2024-11-15 10:26:29.551294] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa14c00d020 is same with the state(6) to be set 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 [2024-11-15 10:26:29.552081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x677680 is same with the state(6) to be set 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Read completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 Write completed with error (sct=0, sc=8) 00:07:41.242 [2024-11-15 10:26:29.552812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6772c0 is same with the state(6) to be set 00:07:41.242 Initializing NVMe Controllers 00:07:41.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:41.242 Controller IO queue size 128, less than required. 00:07:41.242 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:41.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:41.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:41.242 Initialization complete. Launching workers. 
00:07:41.242 ======================================================== 00:07:41.242 Latency(us) 00:07:41.242 Device Information : IOPS MiB/s Average min max 00:07:41.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.08 0.08 909784.35 414.93 1014773.96 00:07:41.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.60 0.08 914197.75 505.34 1013951.63 00:07:41.242 ======================================================== 00:07:41.242 Total : 325.69 0.16 911974.26 414.93 1014773.96 00:07:41.242 00:07:41.242 [2024-11-15 10:26:29.553248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6789a0 (9): Bad file descriptor 00:07:41.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:41.242 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.242 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:41.242 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 271649 00:07:41.242 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 271649 00:07:41.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (271649) - No such process 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 271649 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 271649 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 271649 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.808 10:26:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.808 [2024-11-15 10:26:30.077192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=272155 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:41.808 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:41.809 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 272155 00:07:41.809 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.809 [2024-11-15 10:26:30.148472] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
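Between the two perf runs, delete_subsystem.sh rebuilds the target state over JSON-RPC: it re-creates nqn.2016-06.io.spdk:cnode1, adds the TCP listener on 10.0.0.2:4420, re-attaches the Delay0 namespace, and launches a second 3-second spdk_nvme_perf run at queue depth 128. The "Controller IO queue size 128, less than required" message printed by perf is informational: with -q 128 against an I/O queue of 128 entries, not every request can be outstanding on the qpair at once, so the surplus waits in the NVMe driver, exactly as the message says. Below is a sketch of the same RPC sequence driven directly with the repository's scripts/rpc.py (which talks to the default /var/tmp/spdk.sock, matching rpc_addr in the trace); the Delay0 bdev is assumed to already exist, since it is created earlier in the script, outside this excerpt:

  # Rebuild cnode1 and point a short perf run at it; flags copied from the trace above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # I/O load, same invocation as traced:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
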
00:07:42.375 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.375 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 272155 00:07:42.375 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.633 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.633 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 272155 00:07:42.633 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.198 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.198 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 272155 00:07:43.198 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.763 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.763 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 272155 00:07:43.763 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.328 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.328 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 272155 00:07:44.328 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.895 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.895 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 272155 00:07:44.895 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.895 Initializing NVMe Controllers 00:07:44.895 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:44.895 Controller IO queue size 128, less than required. 00:07:44.895 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:44.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:44.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:44.895 Initialization complete. Launching workers. 
00:07:44.895 ======================================================== 00:07:44.895 Latency(us) 00:07:44.895 Device Information : IOPS MiB/s Average min max 00:07:44.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004120.04 1000174.22 1012319.21 00:07:44.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005033.45 1000325.93 1042800.80 00:07:44.895 ======================================================== 00:07:44.895 Total : 256.00 0.12 1004576.75 1000174.22 1042800.80 00:07:44.895 00:07:45.154 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:45.154 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 272155 00:07:45.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (272155) - No such process 00:07:45.154 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 272155 00:07:45.154 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:45.154 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:45.154 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:45.154 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:45.154 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:45.154 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:45.154 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:45.154 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:45.154 rmmod nvme_tcp 00:07:45.413 rmmod nvme_fabrics 00:07:45.413 rmmod nvme_keyring 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 271607 ']' 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 271607 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 271607 ']' 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 271607 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 271607 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo 
']' 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 271607' 00:07:45.413 killing process with pid 271607 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 271607 00:07:45.413 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 271607 00:07:45.673 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:45.673 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:45.673 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:45.673 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:45.673 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:45.673 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:45.673 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:45.673 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:45.673 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:45.673 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.673 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.673 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.580 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:47.580 00:07:47.580 real 0m12.436s 00:07:47.580 user 0m27.825s 00:07:47.580 sys 0m3.123s 00:07:47.580 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:47.580 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.580 ************************************ 00:07:47.580 END TEST nvmf_delete_subsystem 00:07:47.580 ************************************ 00:07:47.580 10:26:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:47.580 10:26:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:47.580 10:26:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:47.580 10:26:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:47.580 ************************************ 00:07:47.580 START TEST nvmf_host_management 00:07:47.580 ************************************ 00:07:47.580 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:47.839 * Looking for test storage... 
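Before host_management.sh takes over here, nvmftestfini (traced just above) tears the delete_subsystem fixture down in a fixed order: sync, unload the kernel initiator modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are modprobe's verbose output), kill the nvmf_tgt process (pid 271607 on this run), strip the SPDK-tagged iptables rule, and remove the cvl_0_0_ns_spdk namespace, after which the suite reports 12.4 s for nvmf_delete_subsystem. A condensed sketch of that cleanup; the PID, interface and namespace names are specific to this run, and the final netns removal is what _remove_spdk_ns is assumed to do, since its body is not shown in this excerpt:

  sync
  modprobe -v -r nvme-tcp          # verbose removal also pulls out nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 271607                      # killprocess: stop the nvmf_tgt reactors; the script then waits on it
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the initiator-side address
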
00:07:47.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:47.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.839 --rc genhtml_branch_coverage=1 00:07:47.839 --rc genhtml_function_coverage=1 00:07:47.839 --rc genhtml_legend=1 00:07:47.839 --rc geninfo_all_blocks=1 00:07:47.839 --rc geninfo_unexecuted_blocks=1 00:07:47.839 00:07:47.839 ' 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:47.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.839 --rc genhtml_branch_coverage=1 00:07:47.839 --rc genhtml_function_coverage=1 00:07:47.839 --rc genhtml_legend=1 00:07:47.839 --rc geninfo_all_blocks=1 00:07:47.839 --rc geninfo_unexecuted_blocks=1 00:07:47.839 00:07:47.839 ' 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:47.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.839 --rc genhtml_branch_coverage=1 00:07:47.839 --rc genhtml_function_coverage=1 00:07:47.839 --rc genhtml_legend=1 00:07:47.839 --rc geninfo_all_blocks=1 00:07:47.839 --rc geninfo_unexecuted_blocks=1 00:07:47.839 00:07:47.839 ' 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:47.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.839 --rc genhtml_branch_coverage=1 00:07:47.839 --rc genhtml_function_coverage=1 00:07:47.839 --rc genhtml_legend=1 00:07:47.839 --rc geninfo_all_blocks=1 00:07:47.839 --rc geninfo_unexecuted_blocks=1 00:07:47.839 00:07:47.839 ' 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.839 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:47.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:47.840 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.374 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:07:50.375 Found 0000:82:00.0 (0x8086 - 0x159b) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:07:50.375 Found 0000:82:00.1 (0x8086 - 0x159b) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:07:50.375 Found net devices under 0000:82:00.0: cvl_0_0 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.375 10:26:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:07:50.375 Found net devices under 0000:82:00.1: cvl_0_1 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:50.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:07:50.375 00:07:50.375 --- 10.0.0.2 ping statistics --- 00:07:50.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.375 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:07:50.375 00:07:50.375 --- 10.0.0.1 ping statistics --- 00:07:50.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.375 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=274509 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 274509 00:07:50.375 10:26:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 274509 ']' 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.375 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:50.376 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.376 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:50.376 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.376 [2024-11-15 10:26:38.597060] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:07:50.376 [2024-11-15 10:26:38.597144] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.376 [2024-11-15 10:26:38.668071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.376 [2024-11-15 10:26:38.726789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.376 [2024-11-15 10:26:38.726846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.376 [2024-11-15 10:26:38.726875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.376 [2024-11-15 10:26:38.726886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.376 [2024-11-15 10:26:38.726894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
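The nvmf_tgt instance starting here was launched by nvmfappstart inside the cvl_0_0_ns_spdk namespace that nvmftestinit built above: the two e810 ports show up as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into the namespace as the target side with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, an iptables rule admits TCP port 4420, and both directions are verified with a single ping each. The target runs with core mask 0x1E (cores 1-4, matching the four reactor threads reported on the next lines) and -e 0xFFFF enables every tracepoint group, which is why the banner suggests 'spdk_trace -s nvmf -i 0'. A sketch of the same topology and launch; interface names, addresses and paths are the ones from this rig:

  # Physical-NIC split used by nvmftestinit on this machine.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Target launch, as traced; the harness backgrounds it and records the PID as nvmfpid.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
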
00:07:50.376 [2024-11-15 10:26:38.728520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.376 [2024-11-15 10:26:38.728585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.376 [2024-11-15 10:26:38.728634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.376 [2024-11-15 10:26:38.728638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.634 [2024-11-15 10:26:38.878863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.634 Malloc0 00:07:50.634 [2024-11-15 10:26:38.954149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:50.634 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=274636 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 274636 /var/tmp/bdevperf.sock 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 274636 ']' 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.635 { 00:07:50.635 "params": { 00:07:50.635 "name": "Nvme$subsystem", 00:07:50.635 "trtype": "$TEST_TRANSPORT", 00:07:50.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.635 "adrfam": "ipv4", 00:07:50.635 "trsvcid": "$NVMF_PORT", 00:07:50.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.635 "hdgst": ${hdgst:-false}, 00:07:50.635 "ddgst": ${ddgst:-false} 00:07:50.635 }, 00:07:50.635 "method": "bdev_nvme_attach_controller" 00:07:50.635 } 00:07:50.635 EOF 00:07:50.635 )") 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:50.635 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.635 "params": { 00:07:50.635 "name": "Nvme0", 00:07:50.635 "trtype": "tcp", 00:07:50.635 "traddr": "10.0.0.2", 00:07:50.635 "adrfam": "ipv4", 00:07:50.635 "trsvcid": "4420", 00:07:50.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.635 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:50.635 "hdgst": false, 00:07:50.635 "ddgst": false 00:07:50.635 }, 00:07:50.635 "method": "bdev_nvme_attach_controller" 00:07:50.635 }' 00:07:50.635 [2024-11-15 10:26:39.035979] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:07:50.635 [2024-11-15 10:26:39.036052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274636 ] 00:07:50.893 [2024-11-15 10:26:39.105695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.893 [2024-11-15 10:26:39.165209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.152 Running I/O for 10 seconds... 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:51.152 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:51.409 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:51.409 
10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:51.409 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:51.409 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:51.409 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.409 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.668 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.668 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=554 00:07:51.668 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 554 -ge 100 ']' 00:07:51.668 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:51.668 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:51.668 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:51.668 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:51.668 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.668 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.668 [2024-11-15 10:26:39.918042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is 
same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.668 [2024-11-15 10:26:39.918464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918605] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 [2024-11-15 10:26:39.918652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf10 is same with the state(6) to be set 00:07:51.669 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.669 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:51.669 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.669 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.669 [2024-11-15 10:26:39.925155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.669 [2024-11-15 10:26:39.925943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.669 [2024-11-15 10:26:39.925957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.925972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.925987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.926973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.926988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.927002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.927017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.670 [2024-11-15 10:26:39.927031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.670 [2024-11-15 10:26:39.927046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.671 [2024-11-15 10:26:39.927060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.671 [2024-11-15 10:26:39.927074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.671 [2024-11-15 10:26:39.927088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.671 [2024-11-15 10:26:39.927103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.671 [2024-11-15 10:26:39.927117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.671 [2024-11-15 10:26:39.927131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.671 [2024-11-15 10:26:39.927150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.671 [2024-11-15 10:26:39.927192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:07:51.671 [2024-11-15 10:26:39.927336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.671 [2024-11-15 10:26:39.927358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.671 [2024-11-15 10:26:39.927383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.671 [2024-11-15 10:26:39.927397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:51.671 [2024-11-15 10:26:39.927414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.671 [2024-11-15 10:26:39.927428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.671 [2024-11-15 10:26:39.927443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.671 [2024-11-15 10:26:39.927456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.671 [2024-11-15 10:26:39.927479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4a40 is same with the state(6) to be set 00:07:51.671 [2024-11-15 10:26:39.928581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:51.671 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.671 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:51.671 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:51.671 00:07:51.671 Latency(us) 00:07:51.671 [2024-11-15T09:26:40.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.671 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:51.671 Job: Nvme0n1 ended in about 0.41 seconds with error 00:07:51.671 Verification LBA range: start 0x0 length 0x400 00:07:51.671 Nvme0n1 : 0.41 1556.57 97.29 155.66 0.00 36334.37 2961.26 34175.81 00:07:51.671 [2024-11-15T09:26:40.134Z] =================================================================================================================== 00:07:51.671 [2024-11-15T09:26:40.134Z] Total : 1556.57 97.29 155.66 0.00 36334.37 2961.26 34175.81 00:07:51.671 [2024-11-15 10:26:39.931284] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.671 [2024-11-15 10:26:39.931316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f4a40 (9): Bad file descriptor 00:07:51.671 [2024-11-15 10:26:39.980758] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
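The pass/fail gate applied just before this point is the waitforio loop (host_management.sh@52-64): it repeatedly asks the bdevperf instance for Nvme0n1's read counter over the private /var/tmp/bdevperf.sock RPC socket and succeeds once at least 100 reads have completed (67 on the first poll above, 554 after the 0.25 s sleep). A standalone sketch of that loop, using scripts/rpc.py directly instead of the test's rpc_cmd wrapper; the socket path and bdev name match the log, while the retry count and sleep interval are illustrative:

sock=/var/tmp/bdevperf.sock
bdev=Nvme0n1
ret=1
for i in $(seq 10); do
    # Ask the running bdevperf app for per-bdev I/O statistics.
    reads=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
done
[ "$ret" -eq 0 ] && echo "I/O is flowing on $bdev"

Once I/O is confirmed, the host is removed from the subsystem (nvmf_subsystem_remove_host), which is what produces the SQ DELETION aborts and the controller reset logged above.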
00:07:52.604 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 274636 00:07:52.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (274636) - No such process 00:07:52.604 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:52.604 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:52.604 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:52.604 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:52.604 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:52.604 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:52.604 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:52.604 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:52.604 { 00:07:52.604 "params": { 00:07:52.604 "name": "Nvme$subsystem", 00:07:52.604 "trtype": "$TEST_TRANSPORT", 00:07:52.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:52.604 "adrfam": "ipv4", 00:07:52.604 "trsvcid": "$NVMF_PORT", 00:07:52.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:52.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:52.604 "hdgst": ${hdgst:-false}, 00:07:52.604 "ddgst": ${ddgst:-false} 00:07:52.604 }, 00:07:52.604 "method": "bdev_nvme_attach_controller" 00:07:52.604 } 00:07:52.604 EOF 00:07:52.604 )") 00:07:52.604 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:52.604 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:52.604 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:52.604 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:52.604 "params": { 00:07:52.604 "name": "Nvme0", 00:07:52.604 "trtype": "tcp", 00:07:52.604 "traddr": "10.0.0.2", 00:07:52.604 "adrfam": "ipv4", 00:07:52.604 "trsvcid": "4420", 00:07:52.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:52.604 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:52.604 "hdgst": false, 00:07:52.604 "ddgst": false 00:07:52.604 }, 00:07:52.604 "method": "bdev_nvme_attach_controller" 00:07:52.604 }' 00:07:52.604 [2024-11-15 10:26:40.981934] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:07:52.604 [2024-11-15 10:26:40.982005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274841 ] 00:07:52.604 [2024-11-15 10:26:41.051554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.863 [2024-11-15 10:26:41.112082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.121 Running I/O for 1 seconds... 
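The --json /dev/fd/62 argument on the bdevperf command line above comes from gen_nvmf_target_json: each subsystem id yields a small bdev_nvme_attach_controller fragment emitted from a heredoc into a bash array, the fragments are joined with IFS=',' and normalised through jq, and the result reaches bdevperf via process substitution so nothing is written to disk. A reduced sketch of that flow; it keeps only the attach-controller fragments visible in this log and wraps them in a plain JSON array rather than whatever outer config the real helper emits:

gen_attach_fragments() {
    local subsystem config=()
    for subsystem in "$@"; do
        # One bdev_nvme_attach_controller fragment per subsystem id.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas and pretty-print them as a JSON array.
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}

gen_attach_fragments 0

In the test itself the equivalent call is bdevperf --json <(gen_nvmf_target_json 0) ..., which is why a /dev/fd path shows up on the command line.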
00:07:54.055 1536.00 IOPS, 96.00 MiB/s 00:07:54.055 Latency(us) 00:07:54.055 [2024-11-15T09:26:42.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.055 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:54.055 Verification LBA range: start 0x0 length 0x400 00:07:54.055 Nvme0n1 : 1.01 1579.51 98.72 0.00 0.00 39875.11 5485.61 34369.99 00:07:54.055 [2024-11-15T09:26:42.519Z] =================================================================================================================== 00:07:54.056 [2024-11-15T09:26:42.519Z] Total : 1579.51 98.72 0.00 0.00 39875.11 5485.61 34369.99 00:07:54.313 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:54.313 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:54.313 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:54.313 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:54.313 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:54.313 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:54.313 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:54.313 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:54.313 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:54.313 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:54.313 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:54.314 rmmod nvme_tcp 00:07:54.314 rmmod nvme_fabrics 00:07:54.314 rmmod nvme_keyring 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 274509 ']' 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 274509 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 274509 ']' 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 274509 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 274509 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 274509' 00:07:54.314 killing process with pid 274509 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 274509 00:07:54.314 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 274509 00:07:54.572 [2024-11-15 10:26:42.982904] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:54.572 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.572 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.572 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.572 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:54.572 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:54.573 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.573 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.573 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.573 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:54.573 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.573 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.573 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:57.113 00:07:57.113 real 0m9.033s 00:07:57.113 user 0m20.341s 00:07:57.113 sys 0m2.888s 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.113 ************************************ 00:07:57.113 END TEST nvmf_host_management 00:07:57.113 ************************************ 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.113 ************************************ 00:07:57.113 START TEST nvmf_lvol 00:07:57.113 ************************************ 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:57.113 * Looking for test storage... 00:07:57.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.113 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:57.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.114 --rc genhtml_branch_coverage=1 00:07:57.114 --rc genhtml_function_coverage=1 00:07:57.114 --rc genhtml_legend=1 00:07:57.114 --rc geninfo_all_blocks=1 00:07:57.114 --rc geninfo_unexecuted_blocks=1 00:07:57.114 00:07:57.114 ' 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:57.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.114 --rc genhtml_branch_coverage=1 00:07:57.114 --rc genhtml_function_coverage=1 00:07:57.114 --rc genhtml_legend=1 00:07:57.114 --rc geninfo_all_blocks=1 00:07:57.114 --rc geninfo_unexecuted_blocks=1 00:07:57.114 00:07:57.114 ' 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:57.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.114 --rc genhtml_branch_coverage=1 00:07:57.114 --rc genhtml_function_coverage=1 00:07:57.114 --rc genhtml_legend=1 00:07:57.114 --rc geninfo_all_blocks=1 00:07:57.114 --rc geninfo_unexecuted_blocks=1 00:07:57.114 00:07:57.114 ' 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:57.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.114 --rc genhtml_branch_coverage=1 00:07:57.114 --rc genhtml_function_coverage=1 00:07:57.114 --rc genhtml_legend=1 00:07:57.114 --rc geninfo_all_blocks=1 00:07:57.114 --rc geninfo_unexecuted_blocks=1 00:07:57.114 00:07:57.114 ' 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
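The lcov check traced above reduces to a component-wise version comparison: split each version string on ".", "-" and ":", then compare the fields numerically until one side wins. A minimal standalone sketch of that logic, assuming a simplified re-implementation rather than the exact scripts/common.sh source:

  #!/usr/bin/env bash
  # Compare two dotted version strings the way the trace above does for "lt 1.15 2".
  cmp_versions() {
      local -a ver1 ver2
      local v op=$2
      IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$3"    # "2"    -> (2)
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == *'='* ]]                # equal versions only satisfy <=, >=, ==
  }
  lt() { cmp_versions "$1" '<' "$2"; }

  lt 1.15 2 && echo "lcov 1.15 is older than 2"   # true for this run, so the legacy --rc options are set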
00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:57.114 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:59.019 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:07:59.020 Found 0000:82:00.0 (0x8086 - 0x159b) 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:07:59.020 Found 0000:82:00.1 (0x8086 - 0x159b) 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.020 10:26:47 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:07:59.020 Found net devices under 0000:82:00.0: cvl_0_0 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:07:59.020 Found net devices under 0000:82:00.1: cvl_0_1 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:59.020 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:59.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:07:59.278 00:07:59.278 --- 10.0.0.2 ping statistics --- 00:07:59.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.278 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:07:59.278 00:07:59.278 --- 10.0.0.1 ping statistics --- 00:07:59.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.278 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=277047 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 277047 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 277047 ']' 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.278 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.278 [2024-11-15 10:26:47.591247] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
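Before nvmf_tgt comes up, the nvmf_tcp_init phase traced above has moved one E810 port into a private network namespace and wired up addressing, firewalling and a connectivity check between the two ports. Condensed into a runnable sketch; the interface names, addresses and workspace path are the ones reported for this particular run, and the real helper additionally flushes old addresses and picks the interfaces dynamically:

  #!/usr/bin/env bash
  # Sketch of the target-side network setup traced above: cvl_0_0 becomes the target
  # interface inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the root
  # namespace as the initiator side, and NVMe/TCP port 4420 is opened.
  set -e
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk

  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up

  # The rule is tagged so teardown can drop it via: iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  ping -c 1 10.0.0.2                        # root namespace reaches the target address
  ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace reaches the initiator address

  # Start the target inside the namespace: shm id 0, tracepoint group mask 0xFFFF, cores 0-2
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &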
00:07:59.278 [2024-11-15 10:26:47.591329] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.278 [2024-11-15 10:26:47.661303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.278 [2024-11-15 10:26:47.718912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.278 [2024-11-15 10:26:47.718976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.278 [2024-11-15 10:26:47.718989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.278 [2024-11-15 10:26:47.719014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.278 [2024-11-15 10:26:47.719023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.278 [2024-11-15 10:26:47.720495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.278 [2024-11-15 10:26:47.720553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.278 [2024-11-15 10:26:47.720557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.536 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:59.536 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:59.536 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:59.536 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:59.536 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.536 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.536 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:59.793 [2024-11-15 10:26:48.110930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.793 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:00.052 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:00.052 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:00.310 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:00.310 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:00.568 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:00.826 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3e5a6f6b-88f9-4012-9065-ed9c638f66df 00:08:00.826 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3e5a6f6b-88f9-4012-9065-ed9c638f66df lvol 20 00:08:01.084 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=782e42d5-6795-466b-b244-f03f3a991221 00:08:01.084 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:01.341 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 782e42d5-6795-466b-b244-f03f3a991221 00:08:01.908 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:01.908 [2024-11-15 10:26:50.336874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.908 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.166 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=277472 00:08:02.166 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:02.166 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:03.545 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 782e42d5-6795-466b-b244-f03f3a991221 MY_SNAPSHOT 00:08:03.545 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5755ea4f-f501-4ccc-a1f4-cff9642eafa1 00:08:03.545 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 782e42d5-6795-466b-b244-f03f3a991221 30 00:08:04.111 10:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5755ea4f-f501-4ccc-a1f4-cff9642eafa1 MY_CLONE 00:08:04.370 10:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=715ce386-82af-43e2-8b5b-b5250e505c1c 00:08:04.370 10:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 715ce386-82af-43e2-8b5b-b5250e505c1c 00:08:05.305 10:26:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 277472 00:08:13.413 Initializing NVMe Controllers 00:08:13.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:13.413 Controller IO queue size 128, less than required. 00:08:13.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:13.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:13.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:13.413 Initialization complete. Launching workers. 00:08:13.413 ======================================================== 00:08:13.413 Latency(us) 00:08:13.413 Device Information : IOPS MiB/s Average min max 00:08:13.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10500.60 41.02 12195.84 2101.13 75741.64 00:08:13.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10407.60 40.65 12301.39 2149.71 75203.43 00:08:13.413 ======================================================== 00:08:13.413 Total : 20908.20 81.67 12248.38 2101.13 75741.64 00:08:13.413 00:08:13.413 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:13.413 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 782e42d5-6795-466b-b244-f03f3a991221 00:08:13.413 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e5a6f6b-88f9-4012-9065-ed9c638f66df 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:13.671 rmmod nvme_tcp 00:08:13.671 rmmod nvme_fabrics 00:08:13.671 rmmod nvme_keyring 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 277047 ']' 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 277047 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 277047 ']' 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 277047 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 277047 00:08:13.671 10:27:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 277047' 00:08:13.671 killing process with pid 277047 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 277047 00:08:13.671 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 277047 00:08:13.932 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:13.932 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:13.932 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:13.932 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:13.932 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:13.932 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:13.932 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:13.932 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:13.932 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:13.932 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.932 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.932 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:16.472 00:08:16.472 real 0m19.205s 00:08:16.472 user 1m5.760s 00:08:16.472 sys 0m5.659s 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:16.472 ************************************ 00:08:16.472 END TEST nvmf_lvol 00:08:16.472 ************************************ 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:16.472 ************************************ 00:08:16.472 START TEST nvmf_lvs_grow 00:08:16.472 ************************************ 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:16.472 * Looking for test storage... 
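With the xtrace prefixes stripped away, the nvmf_lvol test that just finished (real 0m19.205s, the two perf cores sustaining roughly 10.4-10.5k IOPS each) is a short RPC sequence: build an lvol store on a RAID-0 of two 64 MiB malloc bdevs, export a 20 MiB lvol over NVMe/TCP, then snapshot, resize, clone and inflate it while spdk_nvme_perf writes to it. A condensed sketch of that flow, using the paths, names and addresses reported in the trace; capturing the returned UUIDs via command substitution is an assumption about the helper, not a literal transcription:

  #!/usr/bin/env bash
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py"

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                    # Malloc0: 64 MiB, 512-byte blocks
  $rpc bdev_malloc_create 64 512                    # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # lvol store UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB lvol, bdev UUID returned

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # 10 s of 4 KiB random writes at queue depth 128, on cores 3-4 (-c 0x18)
  "$SPDK/build/bin/spdk_nvme_perf" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!

  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # taken while I/O is in flight
  $rpc bdev_lvol_resize "$lvol" 30                      # grow the live lvol to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                       # detach the clone from its snapshot

  wait "$perf_pid"

  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"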
00:08:16.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:16.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.472 --rc genhtml_branch_coverage=1 00:08:16.472 --rc genhtml_function_coverage=1 00:08:16.472 --rc genhtml_legend=1 00:08:16.472 --rc geninfo_all_blocks=1 00:08:16.472 --rc geninfo_unexecuted_blocks=1 00:08:16.472 00:08:16.472 ' 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:16.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.472 --rc genhtml_branch_coverage=1 00:08:16.472 --rc genhtml_function_coverage=1 00:08:16.472 --rc genhtml_legend=1 00:08:16.472 --rc geninfo_all_blocks=1 00:08:16.472 --rc geninfo_unexecuted_blocks=1 00:08:16.472 00:08:16.472 ' 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:16.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.472 --rc genhtml_branch_coverage=1 00:08:16.472 --rc genhtml_function_coverage=1 00:08:16.472 --rc genhtml_legend=1 00:08:16.472 --rc geninfo_all_blocks=1 00:08:16.472 --rc geninfo_unexecuted_blocks=1 00:08:16.472 00:08:16.472 ' 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:16.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.472 --rc genhtml_branch_coverage=1 00:08:16.472 --rc genhtml_function_coverage=1 00:08:16.472 --rc genhtml_legend=1 00:08:16.472 --rc geninfo_all_blocks=1 00:08:16.472 --rc geninfo_unexecuted_blocks=1 00:08:16.472 00:08:16.472 ' 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:16.472 10:27:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.472 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:16.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:16.473 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:08:18.492 Found 0000:82:00.0 (0x8086 - 0x159b) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:08:18.492 Found 0000:82:00.1 (0x8086 - 0x159b) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:18.492 10:27:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:08:18.492 Found net devices under 0000:82:00.0: cvl_0_0 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.492 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:08:18.492 Found net devices under 0000:82:00.1: cvl_0_1 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:18.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:08:18.493 00:08:18.493 --- 10.0.0.2 ping statistics --- 00:08:18.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.493 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:08:18.493 00:08:18.493 --- 10.0.0.1 ping statistics --- 00:08:18.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.493 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=280877 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 280877 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 280877 ']' 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:18.493 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.493 [2024-11-15 10:27:06.821841] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
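Note: the nvmf_tcp_init sequence above is reproducible by hand. The lines below are a condensed sketch of the exact commands traced above, assuming the two ice-bound E810 ports were discovered as cvl_0_0 (moved into the target namespace) and cvl_0_1 (left in the root namespace for the initiator side), and that it runs as root:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
  modprobe nvme-tcp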
00:08:18.493 [2024-11-15 10:27:06.821942] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.493 [2024-11-15 10:27:06.896694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.795 [2024-11-15 10:27:06.960063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.795 [2024-11-15 10:27:06.960113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.795 [2024-11-15 10:27:06.960141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.795 [2024-11-15 10:27:06.960152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.795 [2024-11-15 10:27:06.960162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.795 [2024-11-15 10:27:06.960797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.795 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:18.795 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:08:18.795 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:18.795 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:18.795 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.795 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.795 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:19.066 [2024-11-15 10:27:07.375101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.066 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:19.066 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:19.066 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:19.066 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:19.066 ************************************ 00:08:19.066 START TEST lvs_grow_clean 00:08:19.066 ************************************ 00:08:19.066 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:08:19.066 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:19.066 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:19.066 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:19.066 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:19.066 10:27:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:19.066 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:19.066 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:19.066 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:19.067 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:19.365 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:19.365 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:19.667 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2b526245-2942-4195-8d68-1b68e4dc6c18 00:08:19.668 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b526245-2942-4195-8d68-1b68e4dc6c18 00:08:19.668 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:19.938 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:19.938 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:19.938 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2b526245-2942-4195-8d68-1b68e4dc6c18 lvol 150 00:08:20.213 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5b7139aa-92ca-47a7-9dd2-fd8cd8343363 00:08:20.213 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:20.213 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:20.489 [2024-11-15 10:27:08.842903] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:20.489 [2024-11-15 10:27:08.843010] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:20.489 true 00:08:20.489 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
2b526245-2942-4195-8d68-1b68e4dc6c18 00:08:20.489 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:20.766 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:20.766 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:21.046 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5b7139aa-92ca-47a7-9dd2-fd8cd8343363 00:08:21.324 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:21.616 [2024-11-15 10:27:09.950306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.616 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.901 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=281853 00:08:21.901 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:21.901 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:21.901 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 281853 /var/tmp/bdevperf.sock 00:08:21.901 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 281853 ']' 00:08:21.901 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:21.901 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:21.901 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:21.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:21.901 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:21.901 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:21.901 [2024-11-15 10:27:10.278250] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
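Note: everything lvs_grow_clean has done up to this point is plain JSON-RPC against the nvmf_tgt started in cvl_0_0_ns_spdk. A condensed sketch follows, with $SPDK standing in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk and the UUIDs being whatever the create calls return rather than the literal values above:

  rpc=$SPDK/scripts/rpc.py
  truncate -s 200M $SPDK/test/nvmf/target/aio_bdev                          # 200 MiB backing file
  $rpc bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'    # expect 49
  lvol=$($rpc bdev_lvol_create -u $lvs lvol 150)                            # 150 MiB volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420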
00:08:21.901 [2024-11-15 10:27:10.278336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281853 ] 00:08:21.901 [2024-11-15 10:27:10.344733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.173 [2024-11-15 10:27:10.405020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.173 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:22.173 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:08:22.173 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:22.444 Nvme0n1 00:08:22.444 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:22.716 [ 00:08:22.716 { 00:08:22.716 "name": "Nvme0n1", 00:08:22.716 "aliases": [ 00:08:22.716 "5b7139aa-92ca-47a7-9dd2-fd8cd8343363" 00:08:22.716 ], 00:08:22.716 "product_name": "NVMe disk", 00:08:22.716 "block_size": 4096, 00:08:22.716 "num_blocks": 38912, 00:08:22.716 "uuid": "5b7139aa-92ca-47a7-9dd2-fd8cd8343363", 00:08:22.716 "numa_id": 1, 00:08:22.716 "assigned_rate_limits": { 00:08:22.716 "rw_ios_per_sec": 0, 00:08:22.716 "rw_mbytes_per_sec": 0, 00:08:22.716 "r_mbytes_per_sec": 0, 00:08:22.716 "w_mbytes_per_sec": 0 00:08:22.716 }, 00:08:22.716 "claimed": false, 00:08:22.716 "zoned": false, 00:08:22.716 "supported_io_types": { 00:08:22.716 "read": true, 00:08:22.716 "write": true, 00:08:22.716 "unmap": true, 00:08:22.716 "flush": true, 00:08:22.716 "reset": true, 00:08:22.716 "nvme_admin": true, 00:08:22.716 "nvme_io": true, 00:08:22.716 "nvme_io_md": false, 00:08:22.716 "write_zeroes": true, 00:08:22.716 "zcopy": false, 00:08:22.716 "get_zone_info": false, 00:08:22.716 "zone_management": false, 00:08:22.716 "zone_append": false, 00:08:22.716 "compare": true, 00:08:22.716 "compare_and_write": true, 00:08:22.716 "abort": true, 00:08:22.716 "seek_hole": false, 00:08:22.716 "seek_data": false, 00:08:22.716 "copy": true, 00:08:22.716 "nvme_iov_md": false 00:08:22.716 }, 00:08:22.716 "memory_domains": [ 00:08:22.716 { 00:08:22.716 "dma_device_id": "system", 00:08:22.716 "dma_device_type": 1 00:08:22.716 } 00:08:22.716 ], 00:08:22.716 "driver_specific": { 00:08:22.716 "nvme": [ 00:08:22.716 { 00:08:22.716 "trid": { 00:08:22.716 "trtype": "TCP", 00:08:22.716 "adrfam": "IPv4", 00:08:22.716 "traddr": "10.0.0.2", 00:08:22.716 "trsvcid": "4420", 00:08:22.716 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:22.716 }, 00:08:22.716 "ctrlr_data": { 00:08:22.716 "cntlid": 1, 00:08:22.716 "vendor_id": "0x8086", 00:08:22.716 "model_number": "SPDK bdev Controller", 00:08:22.716 "serial_number": "SPDK0", 00:08:22.716 "firmware_revision": "25.01", 00:08:22.716 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:22.716 "oacs": { 00:08:22.716 "security": 0, 00:08:22.716 "format": 0, 00:08:22.716 "firmware": 0, 00:08:22.716 "ns_manage": 0 00:08:22.716 }, 00:08:22.716 "multi_ctrlr": true, 00:08:22.716 
"ana_reporting": false 00:08:22.716 }, 00:08:22.716 "vs": { 00:08:22.716 "nvme_version": "1.3" 00:08:22.716 }, 00:08:22.716 "ns_data": { 00:08:22.716 "id": 1, 00:08:22.716 "can_share": true 00:08:22.716 } 00:08:22.716 } 00:08:22.716 ], 00:08:22.716 "mp_policy": "active_passive" 00:08:22.716 } 00:08:22.716 } 00:08:22.716 ] 00:08:22.716 10:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=281992 00:08:22.716 10:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:22.716 10:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:22.980 Running I/O for 10 seconds... 00:08:23.914 Latency(us) 00:08:23.914 [2024-11-15T09:27:12.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.914 Nvme0n1 : 1.00 16206.00 63.30 0.00 0.00 0.00 0.00 0.00 00:08:23.914 [2024-11-15T09:27:12.377Z] =================================================================================================================== 00:08:23.914 [2024-11-15T09:27:12.377Z] Total : 16206.00 63.30 0.00 0.00 0.00 0.00 0.00 00:08:23.914 00:08:24.847 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2b526245-2942-4195-8d68-1b68e4dc6c18 00:08:24.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.847 Nvme0n1 : 2.00 16485.50 64.40 0.00 0.00 0.00 0.00 0.00 00:08:24.847 [2024-11-15T09:27:13.310Z] =================================================================================================================== 00:08:24.847 [2024-11-15T09:27:13.310Z] Total : 16485.50 64.40 0.00 0.00 0.00 0.00 0.00 00:08:24.847 00:08:25.105 true 00:08:25.105 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b526245-2942-4195-8d68-1b68e4dc6c18 00:08:25.105 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:25.363 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:25.363 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:25.363 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 281992 00:08:25.929 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.929 Nvme0n1 : 3.00 16578.67 64.76 0.00 0.00 0.00 0.00 0.00 00:08:25.929 [2024-11-15T09:27:14.392Z] =================================================================================================================== 00:08:25.929 [2024-11-15T09:27:14.392Z] Total : 16578.67 64.76 0.00 0.00 0.00 0.00 0.00 00:08:25.929 00:08:26.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.863 Nvme0n1 : 4.00 16679.50 65.15 0.00 0.00 0.00 0.00 0.00 00:08:26.863 [2024-11-15T09:27:15.326Z] 
=================================================================================================================== 00:08:26.863 [2024-11-15T09:27:15.326Z] Total : 16679.50 65.15 0.00 0.00 0.00 0.00 0.00 00:08:26.863 00:08:27.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.797 Nvme0n1 : 5.00 16772.60 65.52 0.00 0.00 0.00 0.00 0.00 00:08:27.797 [2024-11-15T09:27:16.260Z] =================================================================================================================== 00:08:27.797 [2024-11-15T09:27:16.260Z] Total : 16772.60 65.52 0.00 0.00 0.00 0.00 0.00 00:08:27.797 00:08:29.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.169 Nvme0n1 : 6.00 16824.33 65.72 0.00 0.00 0.00 0.00 0.00 00:08:29.169 [2024-11-15T09:27:17.632Z] =================================================================================================================== 00:08:29.169 [2024-11-15T09:27:17.632Z] Total : 16824.33 65.72 0.00 0.00 0.00 0.00 0.00 00:08:29.169 00:08:30.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.102 Nvme0n1 : 7.00 16875.43 65.92 0.00 0.00 0.00 0.00 0.00 00:08:30.102 [2024-11-15T09:27:18.565Z] =================================================================================================================== 00:08:30.102 [2024-11-15T09:27:18.565Z] Total : 16875.43 65.92 0.00 0.00 0.00 0.00 0.00 00:08:30.102 00:08:31.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.033 Nvme0n1 : 8.00 16929.25 66.13 0.00 0.00 0.00 0.00 0.00 00:08:31.033 [2024-11-15T09:27:19.496Z] =================================================================================================================== 00:08:31.033 [2024-11-15T09:27:19.496Z] Total : 16929.25 66.13 0.00 0.00 0.00 0.00 0.00 00:08:31.033 00:08:31.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.969 Nvme0n1 : 9.00 16932.22 66.14 0.00 0.00 0.00 0.00 0.00 00:08:31.969 [2024-11-15T09:27:20.432Z] =================================================================================================================== 00:08:31.969 [2024-11-15T09:27:20.432Z] Total : 16932.22 66.14 0.00 0.00 0.00 0.00 0.00 00:08:31.969 00:08:32.903 00:08:32.903 Latency(us) 00:08:32.903 [2024-11-15T09:27:21.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.903 Nvme0n1 : 10.00 16945.18 66.19 0.00 0.00 7549.72 2415.12 15534.46 00:08:32.903 [2024-11-15T09:27:21.366Z] =================================================================================================================== 00:08:32.903 [2024-11-15T09:27:21.366Z] Total : 16945.18 66.19 0.00 0.00 7549.72 2415.12 15534.46 00:08:32.903 { 00:08:32.903 "results": [ 00:08:32.903 { 00:08:32.903 "job": "Nvme0n1", 00:08:32.903 "core_mask": "0x2", 00:08:32.903 "workload": "randwrite", 00:08:32.903 "status": "finished", 00:08:32.903 "queue_depth": 128, 00:08:32.903 "io_size": 4096, 00:08:32.903 "runtime": 10.001665, 00:08:32.903 "iops": 16945.17862775848, 00:08:32.903 "mibps": 66.19210401468156, 00:08:32.903 "io_failed": 0, 00:08:32.903 "io_timeout": 0, 00:08:32.903 "avg_latency_us": 7549.722931703948, 00:08:32.903 "min_latency_us": 2415.122962962963, 00:08:32.903 "max_latency_us": 15534.45925925926 00:08:32.903 } 00:08:32.903 ], 00:08:32.903 "core_count": 1 00:08:32.903 } 00:08:32.903 10:27:21 
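Note: the total_data_clusters change from 49 to 99 recorded above is the core of the clean pass. The 400M truncate and the AIO rescan are issued during provisioning, before bdevperf starts; bdev_lvol_grow_lvstore is then sent while the random-write job is still running, and the cluster count is re-read. A sketch of that check, reusing $rpc, $SPDK and $lvs from the earlier sketch:

  truncate -s 400M $SPDK/test/nvmf/target/aio_bdev        # backing file: 51200 -> 102400 blocks
  $rpc bdev_aio_rescan aio_bdev                           # make the AIO bdev pick up the new size
  $rpc bdev_lvol_grow_lvstore -u $lvs                     # issued mid-run, while I/O is in flight
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # 49 before, 99 after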
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 281853 00:08:32.903 10:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 281853 ']' 00:08:32.903 10:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 281853 00:08:32.903 10:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:08:32.903 10:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:32.903 10:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 281853 00:08:32.903 10:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:32.903 10:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:32.903 10:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 281853' 00:08:32.903 killing process with pid 281853 00:08:32.903 10:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 281853 00:08:32.903 Received shutdown signal, test time was about 10.000000 seconds 00:08:32.903 00:08:32.903 Latency(us) 00:08:32.903 [2024-11-15T09:27:21.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.903 [2024-11-15T09:27:21.366Z] =================================================================================================================== 00:08:32.903 [2024-11-15T09:27:21.366Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:32.903 10:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 281853 00:08:33.161 10:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.418 10:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:33.676 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b526245-2942-4195-8d68-1b68e4dc6c18 00:08:33.676 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:33.933 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:33.933 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:33.933 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:34.191 [2024-11-15 10:27:22.602214] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:34.191 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b526245-2942-4195-8d68-1b68e4dc6c18 00:08:34.191 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:34.191 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b526245-2942-4195-8d68-1b68e4dc6c18 00:08:34.191 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.191 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.191 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.191 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.191 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.191 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.191 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.191 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:34.191 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b526245-2942-4195-8d68-1b68e4dc6c18 00:08:34.449 request: 00:08:34.449 { 00:08:34.449 "uuid": "2b526245-2942-4195-8d68-1b68e4dc6c18", 00:08:34.449 "method": "bdev_lvol_get_lvstores", 00:08:34.449 "req_id": 1 00:08:34.449 } 00:08:34.449 Got JSON-RPC error response 00:08:34.449 response: 00:08:34.449 { 00:08:34.449 "code": -19, 00:08:34.449 "message": "No such device" 00:08:34.449 } 00:08:34.449 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:34.449 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.449 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.450 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.450 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:34.707 aio_bdev 00:08:34.965 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5b7139aa-92ca-47a7-9dd2-fd8cd8343363 00:08:34.965 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 
-- # local bdev_name=5b7139aa-92ca-47a7-9dd2-fd8cd8343363 00:08:34.965 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:34.965 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:08:34.965 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:34.965 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:34.965 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:35.223 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5b7139aa-92ca-47a7-9dd2-fd8cd8343363 -t 2000 00:08:35.480 [ 00:08:35.480 { 00:08:35.480 "name": "5b7139aa-92ca-47a7-9dd2-fd8cd8343363", 00:08:35.480 "aliases": [ 00:08:35.480 "lvs/lvol" 00:08:35.480 ], 00:08:35.480 "product_name": "Logical Volume", 00:08:35.480 "block_size": 4096, 00:08:35.480 "num_blocks": 38912, 00:08:35.480 "uuid": "5b7139aa-92ca-47a7-9dd2-fd8cd8343363", 00:08:35.480 "assigned_rate_limits": { 00:08:35.480 "rw_ios_per_sec": 0, 00:08:35.480 "rw_mbytes_per_sec": 0, 00:08:35.480 "r_mbytes_per_sec": 0, 00:08:35.480 "w_mbytes_per_sec": 0 00:08:35.480 }, 00:08:35.480 "claimed": false, 00:08:35.480 "zoned": false, 00:08:35.480 "supported_io_types": { 00:08:35.480 "read": true, 00:08:35.480 "write": true, 00:08:35.480 "unmap": true, 00:08:35.480 "flush": false, 00:08:35.480 "reset": true, 00:08:35.480 "nvme_admin": false, 00:08:35.480 "nvme_io": false, 00:08:35.480 "nvme_io_md": false, 00:08:35.480 "write_zeroes": true, 00:08:35.480 "zcopy": false, 00:08:35.480 "get_zone_info": false, 00:08:35.480 "zone_management": false, 00:08:35.480 "zone_append": false, 00:08:35.480 "compare": false, 00:08:35.480 "compare_and_write": false, 00:08:35.480 "abort": false, 00:08:35.480 "seek_hole": true, 00:08:35.480 "seek_data": true, 00:08:35.480 "copy": false, 00:08:35.480 "nvme_iov_md": false 00:08:35.480 }, 00:08:35.480 "driver_specific": { 00:08:35.480 "lvol": { 00:08:35.480 "lvol_store_uuid": "2b526245-2942-4195-8d68-1b68e4dc6c18", 00:08:35.480 "base_bdev": "aio_bdev", 00:08:35.480 "thin_provision": false, 00:08:35.480 "num_allocated_clusters": 38, 00:08:35.480 "snapshot": false, 00:08:35.480 "clone": false, 00:08:35.480 "esnap_clone": false 00:08:35.480 } 00:08:35.480 } 00:08:35.480 } 00:08:35.480 ] 00:08:35.480 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:08:35.480 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b526245-2942-4195-8d68-1b68e4dc6c18 00:08:35.480 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:35.740 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:35.740 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
2b526245-2942-4195-8d68-1b68e4dc6c18 00:08:35.740 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:36.002 10:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:36.002 10:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5b7139aa-92ca-47a7-9dd2-fd8cd8343363 00:08:36.259 10:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2b526245-2942-4195-8d68-1b68e4dc6c18 00:08:36.518 10:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:36.776 00:08:36.776 real 0m17.696s 00:08:36.776 user 0m17.180s 00:08:36.776 sys 0m1.947s 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:36.776 ************************************ 00:08:36.776 END TEST lvs_grow_clean 00:08:36.776 ************************************ 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:36.776 ************************************ 00:08:36.776 START TEST lvs_grow_dirty 00:08:36.776 ************************************ 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:36.776 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.034 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:37.034 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:37.292 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:37.292 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:37.292 10:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:37.550 10:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:37.550 10:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:37.550 10:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 lvol 150 00:08:38.116 10:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2815a1c3-ca41-4331-8a4a-ad7bb38b860a 00:08:38.116 10:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:38.116 10:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:38.116 [2024-11-15 10:27:26.534734] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:38.116 [2024-11-15 10:27:26.534827] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:38.116 true 00:08:38.116 10:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:38.116 10:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:38.374 10:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:38.374 10:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 
-a -s SPDK0 00:08:38.631 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2815a1c3-ca41-4331-8a4a-ad7bb38b860a 00:08:39.196 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:39.196 [2024-11-15 10:27:27.597948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.196 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.454 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=284042 00:08:39.454 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:39.454 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:39.454 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 284042 /var/tmp/bdevperf.sock 00:08:39.454 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 284042 ']' 00:08:39.454 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.454 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:39.454 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:39.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:39.454 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:39.454 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:39.712 [2024-11-15 10:27:27.937500] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
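Note: the load generator is the same in both the clean and dirty passes. Roughly, and again assuming $SPDK and $rpc as in the sketches above:

  # Start bdevperf idle (-z) on core 1: 4 KiB random writes, queue depth 128, 10 s run.
  $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
      -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # Attach the exported lvol over NVMe/TCP; it shows up as bdev Nvme0n1.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # Kick off the configured job and collect the JSON result block.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests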
00:08:39.712 [2024-11-15 10:27:27.937586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284042 ] 00:08:39.712 [2024-11-15 10:27:28.002036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.712 [2024-11-15 10:27:28.058323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.712 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:39.712 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:39.712 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:40.313 Nvme0n1 00:08:40.313 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:40.586 [ 00:08:40.586 { 00:08:40.586 "name": "Nvme0n1", 00:08:40.586 "aliases": [ 00:08:40.586 "2815a1c3-ca41-4331-8a4a-ad7bb38b860a" 00:08:40.587 ], 00:08:40.587 "product_name": "NVMe disk", 00:08:40.587 "block_size": 4096, 00:08:40.587 "num_blocks": 38912, 00:08:40.587 "uuid": "2815a1c3-ca41-4331-8a4a-ad7bb38b860a", 00:08:40.587 "numa_id": 1, 00:08:40.587 "assigned_rate_limits": { 00:08:40.587 "rw_ios_per_sec": 0, 00:08:40.587 "rw_mbytes_per_sec": 0, 00:08:40.587 "r_mbytes_per_sec": 0, 00:08:40.587 "w_mbytes_per_sec": 0 00:08:40.587 }, 00:08:40.587 "claimed": false, 00:08:40.587 "zoned": false, 00:08:40.587 "supported_io_types": { 00:08:40.587 "read": true, 00:08:40.587 "write": true, 00:08:40.587 "unmap": true, 00:08:40.587 "flush": true, 00:08:40.587 "reset": true, 00:08:40.587 "nvme_admin": true, 00:08:40.587 "nvme_io": true, 00:08:40.587 "nvme_io_md": false, 00:08:40.587 "write_zeroes": true, 00:08:40.587 "zcopy": false, 00:08:40.587 "get_zone_info": false, 00:08:40.587 "zone_management": false, 00:08:40.587 "zone_append": false, 00:08:40.587 "compare": true, 00:08:40.587 "compare_and_write": true, 00:08:40.587 "abort": true, 00:08:40.587 "seek_hole": false, 00:08:40.587 "seek_data": false, 00:08:40.587 "copy": true, 00:08:40.587 "nvme_iov_md": false 00:08:40.587 }, 00:08:40.587 "memory_domains": [ 00:08:40.587 { 00:08:40.587 "dma_device_id": "system", 00:08:40.587 "dma_device_type": 1 00:08:40.587 } 00:08:40.587 ], 00:08:40.587 "driver_specific": { 00:08:40.587 "nvme": [ 00:08:40.587 { 00:08:40.587 "trid": { 00:08:40.587 "trtype": "TCP", 00:08:40.587 "adrfam": "IPv4", 00:08:40.587 "traddr": "10.0.0.2", 00:08:40.587 "trsvcid": "4420", 00:08:40.587 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:40.587 }, 00:08:40.587 "ctrlr_data": { 00:08:40.587 "cntlid": 1, 00:08:40.587 "vendor_id": "0x8086", 00:08:40.587 "model_number": "SPDK bdev Controller", 00:08:40.587 "serial_number": "SPDK0", 00:08:40.587 "firmware_revision": "25.01", 00:08:40.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:40.587 "oacs": { 00:08:40.587 "security": 0, 00:08:40.587 "format": 0, 00:08:40.587 "firmware": 0, 00:08:40.587 "ns_manage": 0 00:08:40.587 }, 00:08:40.587 "multi_ctrlr": true, 00:08:40.587 
"ana_reporting": false 00:08:40.587 }, 00:08:40.587 "vs": { 00:08:40.587 "nvme_version": "1.3" 00:08:40.587 }, 00:08:40.587 "ns_data": { 00:08:40.587 "id": 1, 00:08:40.587 "can_share": true 00:08:40.587 } 00:08:40.587 } 00:08:40.587 ], 00:08:40.587 "mp_policy": "active_passive" 00:08:40.587 } 00:08:40.587 } 00:08:40.587 ] 00:08:40.587 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=284101 00:08:40.587 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:40.587 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:40.587 Running I/O for 10 seconds... 00:08:41.530 Latency(us) 00:08:41.530 [2024-11-15T09:27:29.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.530 Nvme0n1 : 1.00 16200.00 63.28 0.00 0.00 0.00 0.00 0.00 00:08:41.530 [2024-11-15T09:27:29.993Z] =================================================================================================================== 00:08:41.530 [2024-11-15T09:27:29.993Z] Total : 16200.00 63.28 0.00 0.00 0.00 0.00 0.00 00:08:41.530 00:08:42.465 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:42.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.465 Nvme0n1 : 2.00 16168.50 63.16 0.00 0.00 0.00 0.00 0.00 00:08:42.465 [2024-11-15T09:27:30.928Z] =================================================================================================================== 00:08:42.465 [2024-11-15T09:27:30.928Z] Total : 16168.50 63.16 0.00 0.00 0.00 0.00 0.00 00:08:42.465 00:08:42.723 true 00:08:42.723 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:42.723 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:42.981 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:42.981 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:42.981 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 284101 00:08:43.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.547 Nvme0n1 : 3.00 16346.33 63.85 0.00 0.00 0.00 0.00 0.00 00:08:43.547 [2024-11-15T09:27:32.010Z] =================================================================================================================== 00:08:43.547 [2024-11-15T09:27:32.010Z] Total : 16346.33 63.85 0.00 0.00 0.00 0.00 0.00 00:08:43.547 00:08:44.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.479 Nvme0n1 : 4.00 16488.25 64.41 0.00 0.00 0.00 0.00 0.00 00:08:44.479 [2024-11-15T09:27:32.942Z] 
=================================================================================================================== 00:08:44.479 [2024-11-15T09:27:32.942Z] Total : 16488.25 64.41 0.00 0.00 0.00 0.00 0.00 00:08:44.479 00:08:45.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.851 Nvme0n1 : 5.00 16619.60 64.92 0.00 0.00 0.00 0.00 0.00 00:08:45.851 [2024-11-15T09:27:34.314Z] =================================================================================================================== 00:08:45.851 [2024-11-15T09:27:34.314Z] Total : 16619.60 64.92 0.00 0.00 0.00 0.00 0.00 00:08:45.851 00:08:46.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.784 Nvme0n1 : 6.00 16686.50 65.18 0.00 0.00 0.00 0.00 0.00 00:08:46.784 [2024-11-15T09:27:35.247Z] =================================================================================================================== 00:08:46.784 [2024-11-15T09:27:35.247Z] Total : 16686.50 65.18 0.00 0.00 0.00 0.00 0.00 00:08:46.784 00:08:47.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.717 Nvme0n1 : 7.00 16743.14 65.40 0.00 0.00 0.00 0.00 0.00 00:08:47.717 [2024-11-15T09:27:36.180Z] =================================================================================================================== 00:08:47.717 [2024-11-15T09:27:36.180Z] Total : 16743.14 65.40 0.00 0.00 0.00 0.00 0.00 00:08:47.717 00:08:48.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.651 Nvme0n1 : 8.00 16802.25 65.63 0.00 0.00 0.00 0.00 0.00 00:08:48.651 [2024-11-15T09:27:37.114Z] =================================================================================================================== 00:08:48.651 [2024-11-15T09:27:37.114Z] Total : 16802.25 65.63 0.00 0.00 0.00 0.00 0.00 00:08:48.651 00:08:49.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.586 Nvme0n1 : 9.00 16836.44 65.77 0.00 0.00 0.00 0.00 0.00 00:08:49.586 [2024-11-15T09:27:38.049Z] =================================================================================================================== 00:08:49.586 [2024-11-15T09:27:38.049Z] Total : 16836.44 65.77 0.00 0.00 0.00 0.00 0.00 00:08:49.586 00:08:50.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.518 Nvme0n1 : 10.00 16864.90 65.88 0.00 0.00 0.00 0.00 0.00 00:08:50.518 [2024-11-15T09:27:38.981Z] =================================================================================================================== 00:08:50.518 [2024-11-15T09:27:38.981Z] Total : 16864.90 65.88 0.00 0.00 0.00 0.00 0.00 00:08:50.518 00:08:50.518 00:08:50.518 Latency(us) 00:08:50.518 [2024-11-15T09:27:38.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.518 Nvme0n1 : 10.01 16866.40 65.88 0.00 0.00 7585.13 3907.89 17282.09 00:08:50.518 [2024-11-15T09:27:38.981Z] =================================================================================================================== 00:08:50.518 [2024-11-15T09:27:38.981Z] Total : 16866.40 65.88 0.00 0.00 7585.13 3907.89 17282.09 00:08:50.518 { 00:08:50.518 "results": [ 00:08:50.518 { 00:08:50.518 "job": "Nvme0n1", 00:08:50.518 "core_mask": "0x2", 00:08:50.518 "workload": "randwrite", 00:08:50.518 "status": "finished", 00:08:50.518 "queue_depth": 128, 00:08:50.518 "io_size": 4096, 00:08:50.518 
"runtime": 10.006698, 00:08:50.518 "iops": 16866.402883348732, 00:08:50.518 "mibps": 65.88438626308098, 00:08:50.518 "io_failed": 0, 00:08:50.518 "io_timeout": 0, 00:08:50.518 "avg_latency_us": 7585.125502136393, 00:08:50.518 "min_latency_us": 3907.8874074074074, 00:08:50.518 "max_latency_us": 17282.085925925927 00:08:50.518 } 00:08:50.518 ], 00:08:50.518 "core_count": 1 00:08:50.518 } 00:08:50.518 10:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 284042 00:08:50.518 10:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 284042 ']' 00:08:50.518 10:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 284042 00:08:50.518 10:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:08:50.518 10:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:50.518 10:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 284042 00:08:50.776 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:50.776 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:50.776 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 284042' 00:08:50.776 killing process with pid 284042 00:08:50.776 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 284042 00:08:50.776 Received shutdown signal, test time was about 10.000000 seconds 00:08:50.776 00:08:50.776 Latency(us) 00:08:50.776 [2024-11-15T09:27:39.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.776 [2024-11-15T09:27:39.239Z] =================================================================================================================== 00:08:50.776 [2024-11-15T09:27:39.239Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:50.776 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 284042 00:08:50.776 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:51.034 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:51.600 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:51.600 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:51.600 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:51.600 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:51.600 10:27:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 280877 00:08:51.600 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 280877 00:08:51.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 280877 Killed "${NVMF_APP[@]}" "$@" 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=285402 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 285402 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 285402 ']' 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:51.859 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:51.859 [2024-11-15 10:27:40.138902] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:08:51.859 [2024-11-15 10:27:40.138989] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.859 [2024-11-15 10:27:40.216276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.859 [2024-11-15 10:27:40.273003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.859 [2024-11-15 10:27:40.273064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.859 [2024-11-15 10:27:40.273091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.859 [2024-11-15 10:27:40.273102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:51.859 [2024-11-15 10:27:40.273112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.859 [2024-11-15 10:27:40.273729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.117 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:52.117 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:52.117 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.117 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.117 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:52.117 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.117 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:52.374 [2024-11-15 10:27:40.667643] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:52.374 [2024-11-15 10:27:40.667819] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:52.374 [2024-11-15 10:27:40.667872] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:52.374 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:52.374 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2815a1c3-ca41-4331-8a4a-ad7bb38b860a 00:08:52.374 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=2815a1c3-ca41-4331-8a4a-ad7bb38b860a 00:08:52.374 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:52.374 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:52.375 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:52.375 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:52.375 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:52.632 10:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2815a1c3-ca41-4331-8a4a-ad7bb38b860a -t 2000 00:08:52.890 [ 00:08:52.890 { 00:08:52.890 "name": "2815a1c3-ca41-4331-8a4a-ad7bb38b860a", 00:08:52.890 "aliases": [ 00:08:52.890 "lvs/lvol" 00:08:52.890 ], 00:08:52.890 "product_name": "Logical Volume", 00:08:52.890 "block_size": 4096, 00:08:52.890 "num_blocks": 38912, 00:08:52.890 "uuid": "2815a1c3-ca41-4331-8a4a-ad7bb38b860a", 00:08:52.890 "assigned_rate_limits": { 00:08:52.890 "rw_ios_per_sec": 0, 00:08:52.890 "rw_mbytes_per_sec": 0, 
00:08:52.890 "r_mbytes_per_sec": 0, 00:08:52.890 "w_mbytes_per_sec": 0 00:08:52.890 }, 00:08:52.890 "claimed": false, 00:08:52.890 "zoned": false, 00:08:52.890 "supported_io_types": { 00:08:52.890 "read": true, 00:08:52.890 "write": true, 00:08:52.890 "unmap": true, 00:08:52.890 "flush": false, 00:08:52.890 "reset": true, 00:08:52.890 "nvme_admin": false, 00:08:52.890 "nvme_io": false, 00:08:52.890 "nvme_io_md": false, 00:08:52.890 "write_zeroes": true, 00:08:52.890 "zcopy": false, 00:08:52.890 "get_zone_info": false, 00:08:52.890 "zone_management": false, 00:08:52.890 "zone_append": false, 00:08:52.890 "compare": false, 00:08:52.890 "compare_and_write": false, 00:08:52.890 "abort": false, 00:08:52.890 "seek_hole": true, 00:08:52.890 "seek_data": true, 00:08:52.890 "copy": false, 00:08:52.890 "nvme_iov_md": false 00:08:52.890 }, 00:08:52.890 "driver_specific": { 00:08:52.890 "lvol": { 00:08:52.890 "lvol_store_uuid": "3857557e-94b9-4eda-8abe-dc40d983dfb3", 00:08:52.890 "base_bdev": "aio_bdev", 00:08:52.890 "thin_provision": false, 00:08:52.890 "num_allocated_clusters": 38, 00:08:52.890 "snapshot": false, 00:08:52.890 "clone": false, 00:08:52.890 "esnap_clone": false 00:08:52.890 } 00:08:52.890 } 00:08:52.890 } 00:08:52.890 ] 00:08:52.890 10:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:52.890 10:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:52.890 10:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:53.148 10:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:53.148 10:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:53.148 10:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:53.405 10:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:53.405 10:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:53.663 [2024-11-15 10:27:42.033149] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:53.663 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:53.663 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:53.663 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:53.663 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.663 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.663 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.663 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.663 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.663 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.663 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.663 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:53.663 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:53.921 request: 00:08:53.921 { 00:08:53.921 "uuid": "3857557e-94b9-4eda-8abe-dc40d983dfb3", 00:08:53.921 "method": "bdev_lvol_get_lvstores", 00:08:53.921 "req_id": 1 00:08:53.921 } 00:08:53.921 Got JSON-RPC error response 00:08:53.921 response: 00:08:53.921 { 00:08:53.921 "code": -19, 00:08:53.921 "message": "No such device" 00:08:53.921 } 00:08:53.921 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:53.921 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.921 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.921 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.921 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.179 aio_bdev 00:08:54.179 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2815a1c3-ca41-4331-8a4a-ad7bb38b860a 00:08:54.179 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=2815a1c3-ca41-4331-8a4a-ad7bb38b860a 00:08:54.179 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:54.179 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:54.179 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:54.179 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:54.179 10:27:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:54.436 10:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2815a1c3-ca41-4331-8a4a-ad7bb38b860a -t 2000 00:08:54.695 [ 00:08:54.695 { 00:08:54.695 "name": "2815a1c3-ca41-4331-8a4a-ad7bb38b860a", 00:08:54.695 "aliases": [ 00:08:54.695 "lvs/lvol" 00:08:54.695 ], 00:08:54.695 "product_name": "Logical Volume", 00:08:54.695 "block_size": 4096, 00:08:54.695 "num_blocks": 38912, 00:08:54.695 "uuid": "2815a1c3-ca41-4331-8a4a-ad7bb38b860a", 00:08:54.695 "assigned_rate_limits": { 00:08:54.695 "rw_ios_per_sec": 0, 00:08:54.695 "rw_mbytes_per_sec": 0, 00:08:54.695 "r_mbytes_per_sec": 0, 00:08:54.695 "w_mbytes_per_sec": 0 00:08:54.695 }, 00:08:54.695 "claimed": false, 00:08:54.695 "zoned": false, 00:08:54.695 "supported_io_types": { 00:08:54.695 "read": true, 00:08:54.695 "write": true, 00:08:54.695 "unmap": true, 00:08:54.695 "flush": false, 00:08:54.695 "reset": true, 00:08:54.695 "nvme_admin": false, 00:08:54.695 "nvme_io": false, 00:08:54.695 "nvme_io_md": false, 00:08:54.695 "write_zeroes": true, 00:08:54.695 "zcopy": false, 00:08:54.695 "get_zone_info": false, 00:08:54.695 "zone_management": false, 00:08:54.695 "zone_append": false, 00:08:54.695 "compare": false, 00:08:54.695 "compare_and_write": false, 00:08:54.695 "abort": false, 00:08:54.695 "seek_hole": true, 00:08:54.695 "seek_data": true, 00:08:54.695 "copy": false, 00:08:54.695 "nvme_iov_md": false 00:08:54.695 }, 00:08:54.695 "driver_specific": { 00:08:54.695 "lvol": { 00:08:54.695 "lvol_store_uuid": "3857557e-94b9-4eda-8abe-dc40d983dfb3", 00:08:54.695 "base_bdev": "aio_bdev", 00:08:54.695 "thin_provision": false, 00:08:54.695 "num_allocated_clusters": 38, 00:08:54.695 "snapshot": false, 00:08:54.695 "clone": false, 00:08:54.695 "esnap_clone": false 00:08:54.695 } 00:08:54.695 } 00:08:54.695 } 00:08:54.695 ] 00:08:54.695 10:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:54.695 10:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:54.695 10:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:54.953 10:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:54.953 10:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:54.953 10:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:55.518 10:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:55.518 10:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2815a1c3-ca41-4331-8a4a-ad7bb38b860a 00:08:55.518 10:27:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3857557e-94b9-4eda-8abe-dc40d983dfb3 00:08:56.084 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:56.084 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:56.084 00:08:56.084 real 0m19.364s 00:08:56.084 user 0m48.865s 00:08:56.084 sys 0m4.856s 00:08:56.084 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:56.084 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.084 ************************************ 00:08:56.084 END TEST lvs_grow_dirty 00:08:56.084 ************************************ 00:08:56.342 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:56.342 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:56.342 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:56.342 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:56.342 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:56.342 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:56.342 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:56.342 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:56.342 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:56.342 nvmf_trace.0 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.343 rmmod nvme_tcp 00:08:56.343 rmmod nvme_fabrics 00:08:56.343 rmmod nvme_keyring 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:56.343 
10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 285402 ']' 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 285402 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 285402 ']' 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 285402 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 285402 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 285402' 00:08:56.343 killing process with pid 285402 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 285402 00:08:56.343 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 285402 00:08:56.603 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:56.603 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.603 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.603 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:56.603 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:56.603 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.603 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.603 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.603 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:56.603 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.603 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.603 10:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.511 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:58.511 00:08:58.511 real 0m42.588s 00:08:58.511 user 1m12.099s 00:08:58.511 sys 0m8.816s 00:08:58.511 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:58.511 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.511 ************************************ 00:08:58.511 END TEST nvmf_lvs_grow 00:08:58.511 ************************************ 00:08:58.511 10:27:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:58.511 10:27:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:58.511 10:27:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:58.511 10:27:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.770 ************************************ 00:08:58.770 START TEST nvmf_bdev_io_wait 00:08:58.770 ************************************ 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:58.770 * Looking for test storage... 00:08:58.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.770 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:58.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.770 --rc genhtml_branch_coverage=1 00:08:58.770 --rc genhtml_function_coverage=1 00:08:58.770 --rc genhtml_legend=1 00:08:58.770 --rc geninfo_all_blocks=1 00:08:58.771 --rc geninfo_unexecuted_blocks=1 00:08:58.771 00:08:58.771 ' 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:58.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.771 --rc genhtml_branch_coverage=1 00:08:58.771 --rc genhtml_function_coverage=1 00:08:58.771 --rc genhtml_legend=1 00:08:58.771 --rc geninfo_all_blocks=1 00:08:58.771 --rc geninfo_unexecuted_blocks=1 00:08:58.771 00:08:58.771 ' 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:58.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.771 --rc genhtml_branch_coverage=1 00:08:58.771 --rc genhtml_function_coverage=1 00:08:58.771 --rc genhtml_legend=1 00:08:58.771 --rc geninfo_all_blocks=1 00:08:58.771 --rc geninfo_unexecuted_blocks=1 00:08:58.771 00:08:58.771 ' 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:58.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.771 --rc genhtml_branch_coverage=1 00:08:58.771 --rc genhtml_function_coverage=1 00:08:58.771 --rc genhtml_legend=1 00:08:58.771 --rc geninfo_all_blocks=1 00:08:58.771 --rc geninfo_unexecuted_blocks=1 00:08:58.771 00:08:58.771 ' 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.771 10:27:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:58.771 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:01.301 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:01.302 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:01.302 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.302 10:27:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:01.302 Found net devices under 0000:82:00.0: cvl_0_0 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:01.302 Found net devices under 0000:82:00.1: cvl_0_1 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:01.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:09:01.302 00:09:01.302 --- 10.0.0.2 ping statistics --- 00:09:01.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.302 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:09:01.302 00:09:01.302 --- 10.0.0.1 ping statistics --- 00:09:01.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.302 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=288053 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 288053 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 288053 ']' 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:01.302 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.302 [2024-11-15 10:27:49.596998] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:09:01.303 [2024-11-15 10:27:49.597085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.303 [2024-11-15 10:27:49.667377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.303 [2024-11-15 10:27:49.726250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.303 [2024-11-15 10:27:49.726302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.303 [2024-11-15 10:27:49.726330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.303 [2024-11-15 10:27:49.726341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.303 [2024-11-15 10:27:49.726351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.303 [2024-11-15 10:27:49.727984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.303 [2024-11-15 10:27:49.728091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.303 [2024-11-15 10:27:49.728184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.303 [2024-11-15 10:27:49.728193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:01.561 [2024-11-15 10:27:49.918091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 Malloc0 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.562 [2024-11-15 10:27:49.971536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=288090 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=288092 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=288094 00:09:01.562 10:27:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.562 { 00:09:01.562 "params": { 00:09:01.562 "name": "Nvme$subsystem", 00:09:01.562 "trtype": "$TEST_TRANSPORT", 00:09:01.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.562 "adrfam": "ipv4", 00:09:01.562 "trsvcid": "$NVMF_PORT", 00:09:01.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.562 "hdgst": ${hdgst:-false}, 00:09:01.562 "ddgst": ${ddgst:-false} 00:09:01.562 }, 00:09:01.562 "method": "bdev_nvme_attach_controller" 00:09:01.562 } 00:09:01.562 EOF 00:09:01.562 )") 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.562 { 00:09:01.562 "params": { 00:09:01.562 "name": "Nvme$subsystem", 00:09:01.562 "trtype": "$TEST_TRANSPORT", 00:09:01.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.562 "adrfam": "ipv4", 00:09:01.562 "trsvcid": "$NVMF_PORT", 00:09:01.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.562 "hdgst": ${hdgst:-false}, 00:09:01.562 "ddgst": ${ddgst:-false} 00:09:01.562 }, 00:09:01.562 "method": "bdev_nvme_attach_controller" 00:09:01.562 } 00:09:01.562 EOF 00:09:01.562 )") 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=288096 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:01.562 10:27:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.562 { 00:09:01.562 "params": { 00:09:01.562 "name": "Nvme$subsystem", 00:09:01.562 "trtype": "$TEST_TRANSPORT", 00:09:01.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.562 "adrfam": "ipv4", 00:09:01.562 "trsvcid": "$NVMF_PORT", 00:09:01.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.562 "hdgst": ${hdgst:-false}, 00:09:01.562 "ddgst": ${ddgst:-false} 00:09:01.562 }, 00:09:01.562 "method": "bdev_nvme_attach_controller" 00:09:01.562 } 00:09:01.562 EOF 00:09:01.562 )") 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.562 { 00:09:01.562 "params": { 00:09:01.562 "name": "Nvme$subsystem", 00:09:01.562 "trtype": "$TEST_TRANSPORT", 00:09:01.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.562 "adrfam": "ipv4", 00:09:01.562 "trsvcid": "$NVMF_PORT", 00:09:01.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.562 "hdgst": ${hdgst:-false}, 00:09:01.562 "ddgst": ${ddgst:-false} 00:09:01.562 }, 00:09:01.562 "method": "bdev_nvme_attach_controller" 00:09:01.562 } 00:09:01.562 EOF 00:09:01.562 )") 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 288090 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.562 "params": { 00:09:01.562 "name": "Nvme1", 00:09:01.562 "trtype": "tcp", 00:09:01.562 "traddr": "10.0.0.2", 00:09:01.562 "adrfam": "ipv4", 00:09:01.562 "trsvcid": "4420", 00:09:01.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.562 "hdgst": false, 00:09:01.562 "ddgst": false 00:09:01.562 }, 00:09:01.562 "method": "bdev_nvme_attach_controller" 00:09:01.562 }' 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
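In short, bdev_io_wait.sh exports a single Malloc bdev over NVMe/TCP and drives it with four concurrent bdevperf instances (write, read, flush, unmap), each fed a generated config on /dev/fd/63 that attaches Nvme1 to 10.0.0.2:4420. A condensed sketch of the target setup and of one initiator instance (rpc_cmd and gen_nvmf_target_json are the autotest helpers traced above; the other three instances differ only in -m, -i and -w):

  # target side, via the RPC socket of the nvmf_tgt started above
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: the write instance (read/flush/unmap run in parallel on cores 0x20/0x40/0x80)
  ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json)   # process substitution is what appears as /dev/fd/63 in the trace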
00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.562 "params": { 00:09:01.562 "name": "Nvme1", 00:09:01.562 "trtype": "tcp", 00:09:01.562 "traddr": "10.0.0.2", 00:09:01.562 "adrfam": "ipv4", 00:09:01.562 "trsvcid": "4420", 00:09:01.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.562 "hdgst": false, 00:09:01.562 "ddgst": false 00:09:01.562 }, 00:09:01.562 "method": "bdev_nvme_attach_controller" 00:09:01.562 }' 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:01.562 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.562 "params": { 00:09:01.562 "name": "Nvme1", 00:09:01.562 "trtype": "tcp", 00:09:01.562 "traddr": "10.0.0.2", 00:09:01.562 "adrfam": "ipv4", 00:09:01.562 "trsvcid": "4420", 00:09:01.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.563 "hdgst": false, 00:09:01.563 "ddgst": false 00:09:01.563 }, 00:09:01.563 "method": "bdev_nvme_attach_controller" 00:09:01.563 }' 00:09:01.563 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:01.563 10:27:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.563 "params": { 00:09:01.563 "name": "Nvme1", 00:09:01.563 "trtype": "tcp", 00:09:01.563 "traddr": "10.0.0.2", 00:09:01.563 "adrfam": "ipv4", 00:09:01.563 "trsvcid": "4420", 00:09:01.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.563 "hdgst": false, 00:09:01.563 "ddgst": false 00:09:01.563 }, 00:09:01.563 "method": "bdev_nvme_attach_controller" 00:09:01.563 }' 00:09:01.563 [2024-11-15 10:27:50.024178] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:09:01.563 [2024-11-15 10:27:50.024179] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:09:01.563 [2024-11-15 10:27:50.024179] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:09:01.563 [2024-11-15 10:27:50.024267] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-15 10:27:50.024267] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-15 10:27:50.024266] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:01.563 --proc-type=auto ] 00:09:01.563 --proc-type=auto ] 00:09:01.563 [2024-11-15 10:27:50.025491] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:09:01.563 [2024-11-15 10:27:50.025568] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:01.820 [2024-11-15 10:27:50.206107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.820 [2024-11-15 10:27:50.261086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:02.078 [2024-11-15 10:27:50.306158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.078 [2024-11-15 10:27:50.359474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:02.078 [2024-11-15 10:27:50.404946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.078 [2024-11-15 10:27:50.459095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:02.078 [2024-11-15 10:27:50.475933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.078 [2024-11-15 10:27:50.527691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:02.337 Running I/O for 1 seconds... 00:09:02.337 Running I/O for 1 seconds... 00:09:02.337 Running I/O for 1 seconds... 00:09:02.337 Running I/O for 1 seconds... 00:09:03.271 10445.00 IOPS, 40.80 MiB/s 00:09:03.271 Latency(us) 00:09:03.271 [2024-11-15T09:27:51.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.271 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:03.271 Nvme1n1 : 1.01 10488.24 40.97 0.00 0.00 12151.22 6844.87 17573.36 00:09:03.271 [2024-11-15T09:27:51.734Z] =================================================================================================================== 00:09:03.271 [2024-11-15T09:27:51.734Z] Total : 10488.24 40.97 0.00 0.00 12151.22 6844.87 17573.36 00:09:03.271 200776.00 IOPS, 784.28 MiB/s 00:09:03.271 Latency(us) 00:09:03.271 [2024-11-15T09:27:51.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.271 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:03.271 Nvme1n1 : 1.00 200400.63 782.81 0.00 0.00 635.25 286.72 1856.85 00:09:03.271 [2024-11-15T09:27:51.734Z] =================================================================================================================== 00:09:03.271 [2024-11-15T09:27:51.734Z] Total : 200400.63 782.81 0.00 0.00 635.25 286.72 1856.85 00:09:03.271 9322.00 IOPS, 36.41 MiB/s 00:09:03.271 Latency(us) 00:09:03.271 [2024-11-15T09:27:51.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.271 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:03.271 Nvme1n1 : 1.01 9385.74 36.66 0.00 0.00 13582.31 5704.06 23592.96 00:09:03.271 [2024-11-15T09:27:51.735Z] =================================================================================================================== 00:09:03.272 [2024-11-15T09:27:51.735Z] Total : 9385.74 36.66 0.00 0.00 13582.31 5704.06 23592.96 00:09:03.529 8430.00 IOPS, 32.93 MiB/s 00:09:03.529 Latency(us) 00:09:03.529 [2024-11-15T09:27:51.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.529 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:03.529 Nvme1n1 : 1.01 8495.58 33.19 0.00 0.00 15000.60 2160.26 22330.79 00:09:03.529 [2024-11-15T09:27:51.992Z] 
=================================================================================================================== 00:09:03.529 [2024-11-15T09:27:51.992Z] Total : 8495.58 33.19 0.00 0.00 15000.60 2160.26 22330.79 00:09:03.529 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 288092 00:09:03.529 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 288094 00:09:03.529 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 288096 00:09:03.529 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:03.529 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.530 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.530 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.530 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:03.530 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:03.530 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:03.530 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:03.530 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:03.530 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:03.530 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.530 10:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:03.530 rmmod nvme_tcp 00:09:03.530 rmmod nvme_fabrics 00:09:03.787 rmmod nvme_keyring 00:09:03.787 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.787 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:03.787 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:03.787 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 288053 ']' 00:09:03.787 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 288053 00:09:03.787 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 288053 ']' 00:09:03.787 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 288053 00:09:03.787 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:09:03.788 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:03.788 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 288053 00:09:03.788 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:03.788 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:03.788 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 288053' 00:09:03.788 killing process with pid 288053 00:09:03.788 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 288053 00:09:03.788 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 288053 00:09:04.047 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:04.048 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:04.048 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:04.048 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:04.048 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:04.048 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:04.048 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:04.048 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:04.048 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:04.048 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.048 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.048 10:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.954 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.954 00:09:05.954 real 0m7.314s 00:09:05.954 user 0m15.996s 00:09:05.954 sys 0m3.652s 00:09:05.954 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:05.954 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.954 ************************************ 00:09:05.954 END TEST nvmf_bdev_io_wait 00:09:05.954 ************************************ 00:09:05.954 10:27:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:05.954 10:27:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:05.954 10:27:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:05.954 10:27:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.954 ************************************ 00:09:05.954 START TEST nvmf_queue_depth 00:09:05.954 ************************************ 00:09:05.954 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:06.215 * Looking for test storage... 
00:09:06.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:06.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.215 --rc genhtml_branch_coverage=1 00:09:06.215 --rc genhtml_function_coverage=1 00:09:06.215 --rc genhtml_legend=1 00:09:06.215 --rc geninfo_all_blocks=1 00:09:06.215 --rc geninfo_unexecuted_blocks=1 00:09:06.215 00:09:06.215 ' 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:06.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.215 --rc genhtml_branch_coverage=1 00:09:06.215 --rc genhtml_function_coverage=1 00:09:06.215 --rc genhtml_legend=1 00:09:06.215 --rc geninfo_all_blocks=1 00:09:06.215 --rc geninfo_unexecuted_blocks=1 00:09:06.215 00:09:06.215 ' 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:06.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.215 --rc genhtml_branch_coverage=1 00:09:06.215 --rc genhtml_function_coverage=1 00:09:06.215 --rc genhtml_legend=1 00:09:06.215 --rc geninfo_all_blocks=1 00:09:06.215 --rc geninfo_unexecuted_blocks=1 00:09:06.215 00:09:06.215 ' 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:06.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.215 --rc genhtml_branch_coverage=1 00:09:06.215 --rc genhtml_function_coverage=1 00:09:06.215 --rc genhtml_legend=1 00:09:06.215 --rc geninfo_all_blocks=1 00:09:06.215 --rc geninfo_unexecuted_blocks=1 00:09:06.215 00:09:06.215 ' 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.215 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.216 10:27:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:08.750 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:08.750 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:08.750 Found net devices under 0000:82:00.0: cvl_0_0 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.750 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:08.751 Found net devices under 0000:82:00.1: cvl_0_1 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:09:08.751 00:09:08.751 --- 10.0.0.2 ping statistics --- 00:09:08.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.751 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:09:08.751 00:09:08.751 --- 10.0.0.1 ping statistics --- 00:09:08.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.751 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=290328 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 290328 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 290328 ']' 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:08.751 10:27:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.751 [2024-11-15 10:27:57.019236] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:09:08.751 [2024-11-15 10:27:57.019319] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.751 [2024-11-15 10:27:57.092856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.751 [2024-11-15 10:27:57.146530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.751 [2024-11-15 10:27:57.146586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.751 [2024-11-15 10:27:57.146614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.751 [2024-11-15 10:27:57.146624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.751 [2024-11-15 10:27:57.146634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.751 [2024-11-15 10:27:57.147183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.010 [2024-11-15 10:27:57.284441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.010 Malloc0 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.010 10:27:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.010 [2024-11-15 10:27:57.331723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=290463 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 290463 /var/tmp/bdevperf.sock 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 290463 ']' 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:09.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:09.010 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.010 [2024-11-15 10:27:57.378296] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:09:09.010 [2024-11-15 10:27:57.378377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290463 ] 00:09:09.010 [2024-11-15 10:27:57.442811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.268 [2024-11-15 10:27:57.500816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.268 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:09.268 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:09.268 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:09.268 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.268 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.525 NVMe0n1 00:09:09.525 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.525 10:27:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:09.807 Running I/O for 10 seconds... 00:09:11.675 9216.00 IOPS, 36.00 MiB/s [2024-11-15T09:28:01.074Z] 9411.00 IOPS, 36.76 MiB/s [2024-11-15T09:28:02.009Z] 9398.67 IOPS, 36.71 MiB/s [2024-11-15T09:28:03.382Z] 9473.00 IOPS, 37.00 MiB/s [2024-11-15T09:28:04.316Z] 9586.00 IOPS, 37.45 MiB/s [2024-11-15T09:28:05.250Z] 9620.83 IOPS, 37.58 MiB/s [2024-11-15T09:28:06.184Z] 9644.29 IOPS, 37.67 MiB/s [2024-11-15T09:28:07.119Z] 9697.00 IOPS, 37.88 MiB/s [2024-11-15T09:28:08.055Z] 9688.44 IOPS, 37.85 MiB/s [2024-11-15T09:28:08.314Z] 9712.30 IOPS, 37.94 MiB/s 00:09:19.851 Latency(us) 00:09:19.851 [2024-11-15T09:28:08.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.851 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:19.851 Verification LBA range: start 0x0 length 0x4000 00:09:19.851 NVMe0n1 : 10.08 9732.15 38.02 0.00 0.00 104839.81 20486.07 71846.87 00:09:19.851 [2024-11-15T09:28:08.314Z] =================================================================================================================== 00:09:19.851 [2024-11-15T09:28:08.314Z] Total : 9732.15 38.02 0.00 0.00 104839.81 20486.07 71846.87 00:09:19.851 { 00:09:19.851 "results": [ 00:09:19.851 { 00:09:19.851 "job": "NVMe0n1", 00:09:19.851 "core_mask": "0x1", 00:09:19.851 "workload": "verify", 00:09:19.851 "status": "finished", 00:09:19.851 "verify_range": { 00:09:19.851 "start": 0, 00:09:19.851 "length": 16384 00:09:19.851 }, 00:09:19.851 "queue_depth": 1024, 00:09:19.851 "io_size": 4096, 00:09:19.851 "runtime": 10.084823, 00:09:19.851 "iops": 9732.148992600069, 00:09:19.851 "mibps": 38.01620700234402, 00:09:19.851 "io_failed": 0, 00:09:19.851 "io_timeout": 0, 00:09:19.851 "avg_latency_us": 104839.81257960378, 00:09:19.851 "min_latency_us": 20486.068148148148, 00:09:19.851 "max_latency_us": 71846.87407407408 00:09:19.851 } 00:09:19.851 ], 00:09:19.851 "core_count": 1 00:09:19.851 } 00:09:19.851 10:28:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 290463 00:09:19.851 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 290463 ']' 00:09:19.851 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 290463 00:09:19.851 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:19.851 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:19.851 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 290463 00:09:19.851 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:19.851 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:19.851 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 290463' 00:09:19.851 killing process with pid 290463 00:09:19.851 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 290463 00:09:19.851 Received shutdown signal, test time was about 10.000000 seconds 00:09:19.851 00:09:19.851 Latency(us) 00:09:19.851 [2024-11-15T09:28:08.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.851 [2024-11-15T09:28:08.314Z] =================================================================================================================== 00:09:19.851 [2024-11-15T09:28:08.314Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:19.851 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 290463 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:20.109 rmmod nvme_tcp 00:09:20.109 rmmod nvme_fabrics 00:09:20.109 rmmod nvme_keyring 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 290328 ']' 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 290328 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 290328 ']' 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 290328 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 290328 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 290328' 00:09:20.109 killing process with pid 290328 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 290328 00:09:20.109 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 290328 00:09:20.368 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:20.368 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:20.368 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:20.368 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:20.368 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:20.368 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:20.368 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:20.368 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:20.368 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:20.368 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.368 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.368 10:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:22.912 00:09:22.912 real 0m16.377s 00:09:22.912 user 0m22.568s 00:09:22.912 sys 0m3.534s 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.912 ************************************ 00:09:22.912 END TEST nvmf_queue_depth 00:09:22.912 ************************************ 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.912 ************************************ 00:09:22.912 START TEST nvmf_target_multipath 00:09:22.912 ************************************ 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:22.912 * Looking for test storage... 00:09:22.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:22.912 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:22.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.913 --rc genhtml_branch_coverage=1 00:09:22.913 --rc genhtml_function_coverage=1 00:09:22.913 --rc genhtml_legend=1 00:09:22.913 --rc geninfo_all_blocks=1 00:09:22.913 --rc geninfo_unexecuted_blocks=1 00:09:22.913 00:09:22.913 ' 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:22.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.913 --rc genhtml_branch_coverage=1 00:09:22.913 --rc genhtml_function_coverage=1 00:09:22.913 --rc genhtml_legend=1 00:09:22.913 --rc geninfo_all_blocks=1 00:09:22.913 --rc geninfo_unexecuted_blocks=1 00:09:22.913 00:09:22.913 ' 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:22.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.913 --rc genhtml_branch_coverage=1 00:09:22.913 --rc genhtml_function_coverage=1 00:09:22.913 --rc genhtml_legend=1 00:09:22.913 --rc geninfo_all_blocks=1 00:09:22.913 --rc geninfo_unexecuted_blocks=1 00:09:22.913 00:09:22.913 ' 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:22.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.913 --rc genhtml_branch_coverage=1 00:09:22.913 --rc genhtml_function_coverage=1 00:09:22.913 --rc genhtml_legend=1 00:09:22.913 --rc geninfo_all_blocks=1 00:09:22.913 --rc geninfo_unexecuted_blocks=1 00:09:22.913 00:09:22.913 ' 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:22.913 10:28:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:24.821 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.821 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.821 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.821 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.821 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.821 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.821 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.821 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:24.821 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.821 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:24.822 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:24.822 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:24.822 Found net devices under 0000:82:00.0: cvl_0_0 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.822 10:28:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:24.822 Found net devices under 0000:82:00.1: cvl_0_1 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.822 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:09:25.081 00:09:25.081 --- 10.0.0.2 ping statistics --- 00:09:25.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.081 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:25.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:09:25.081 00:09:25.081 --- 10.0.0.1 ping statistics --- 00:09:25.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.081 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:25.081 only one NIC for nvmf test 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.081 rmmod nvme_tcp 00:09:25.081 rmmod nvme_fabrics 00:09:25.081 rmmod nvme_keyring 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.081 10:28:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.990 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.991 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.991 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:26.991 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.991 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.991 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.991 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.991 00:09:26.991 real 0m4.637s 00:09:26.991 user 0m0.927s 00:09:26.991 sys 0m1.724s 00:09:26.991 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.991 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:26.991 ************************************ 00:09:26.991 END TEST nvmf_target_multipath 00:09:26.991 ************************************ 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.250 ************************************ 00:09:27.250 START TEST nvmf_zcopy 00:09:27.250 ************************************ 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:27.250 * Looking for test storage... 
00:09:27.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:27.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.250 --rc genhtml_branch_coverage=1 00:09:27.250 --rc genhtml_function_coverage=1 00:09:27.250 --rc genhtml_legend=1 00:09:27.250 --rc geninfo_all_blocks=1 00:09:27.250 --rc geninfo_unexecuted_blocks=1 00:09:27.250 00:09:27.250 ' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:27.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.250 --rc genhtml_branch_coverage=1 00:09:27.250 --rc genhtml_function_coverage=1 00:09:27.250 --rc genhtml_legend=1 00:09:27.250 --rc geninfo_all_blocks=1 00:09:27.250 --rc geninfo_unexecuted_blocks=1 00:09:27.250 00:09:27.250 ' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:27.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.250 --rc genhtml_branch_coverage=1 00:09:27.250 --rc genhtml_function_coverage=1 00:09:27.250 --rc genhtml_legend=1 00:09:27.250 --rc geninfo_all_blocks=1 00:09:27.250 --rc geninfo_unexecuted_blocks=1 00:09:27.250 00:09:27.250 ' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:27.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.250 --rc genhtml_branch_coverage=1 00:09:27.250 --rc genhtml_function_coverage=1 00:09:27.250 --rc genhtml_legend=1 00:09:27.250 --rc geninfo_all_blocks=1 00:09:27.250 --rc geninfo_unexecuted_blocks=1 00:09:27.250 00:09:27.250 ' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.250 10:28:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.782 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:29.783 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:29.783 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:29.783 Found net devices under 0000:82:00.0: cvl_0_0 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:29.783 Found net devices under 0000:82:00.1: cvl_0_1 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:29.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:09:29.783 00:09:29.783 --- 10.0.0.2 ping statistics --- 00:09:29.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.783 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:09:29.783 00:09:29.783 --- 10.0.0.1 ping statistics --- 00:09:29.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.783 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.783 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.784 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.784 10:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.784 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:29.784 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.784 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.784 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.784 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=295675 00:09:29.784 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:29.784 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 295675 00:09:29.784 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 295675 ']' 00:09:29.784 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.784 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:29.784 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.784 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:29.784 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.784 [2024-11-15 10:28:18.065243] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
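Condensed from the nvmf_tcp_init trace above: the phy run moves the target-side e810 port (cvl_0_0) into its own network namespace, keeps the initiator-side port (cvl_0_1) in the root namespace, opens the NVMe/TCP listen port in iptables, and then launches nvmf_tgt inside that namespace. A sketch of the same sequence, with addresses and names taken from the trace:

# Target port lives in cvl_0_0_ns_spdk; initiator port stays in the root ns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port; the comment tag lets nvmftestfini strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions, then start the target inside the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &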
00:09:29.784 [2024-11-15 10:28:18.065330] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.784 [2024-11-15 10:28:18.137452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.784 [2024-11-15 10:28:18.190531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.784 [2024-11-15 10:28:18.190589] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.784 [2024-11-15 10:28:18.190617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.784 [2024-11-15 10:28:18.190628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.784 [2024-11-15 10:28:18.190637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.784 [2024-11-15 10:28:18.191199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.041 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 [2024-11-15 10:28:18.331419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 [2024-11-15 10:28:18.347623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 malloc0 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:30.042 { 00:09:30.042 "params": { 00:09:30.042 "name": "Nvme$subsystem", 00:09:30.042 "trtype": "$TEST_TRANSPORT", 00:09:30.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.042 "adrfam": "ipv4", 00:09:30.042 "trsvcid": "$NVMF_PORT", 00:09:30.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.042 "hdgst": ${hdgst:-false}, 00:09:30.042 "ddgst": ${ddgst:-false} 00:09:30.042 }, 00:09:30.042 "method": "bdev_nvme_attach_controller" 00:09:30.042 } 00:09:30.042 EOF 00:09:30.042 )") 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
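The rpc_cmd calls traced above are thin wrappers around scripts/rpc.py talking to the target's RPC socket (by default /var/tmp/spdk.sock, as shown in waitforlisten). Outside the harness, the same zcopy target provisioning looks roughly like this, with flags copied from the trace and explanatory comments added here:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport with zero-copy enabled and in-capsule data disabled (-c 0);
# the -o flag is carried over verbatim from NVMF_TRANSPORT_OPTS above.
"$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem cnode1: allow any host (-a), fixed serial, at most 10 namespaces.
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB malloc bdev with a 4096-byte block size, exposed as namespace 1.
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1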
00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:30.042 10:28:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:30.042 "params": { 00:09:30.042 "name": "Nvme1", 00:09:30.042 "trtype": "tcp", 00:09:30.042 "traddr": "10.0.0.2", 00:09:30.042 "adrfam": "ipv4", 00:09:30.042 "trsvcid": "4420", 00:09:30.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.042 "hdgst": false, 00:09:30.042 "ddgst": false 00:09:30.042 }, 00:09:30.042 "method": "bdev_nvme_attach_controller" 00:09:30.042 }' 00:09:30.042 [2024-11-15 10:28:18.431054] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:09:30.042 [2024-11-15 10:28:18.431123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid295702 ] 00:09:30.042 [2024-11-15 10:28:18.499741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.300 [2024-11-15 10:28:18.556931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.558 Running I/O for 10 seconds... 00:09:32.426 6337.00 IOPS, 49.51 MiB/s [2024-11-15T09:28:21.824Z] 6256.00 IOPS, 48.88 MiB/s [2024-11-15T09:28:23.198Z] 6315.00 IOPS, 49.34 MiB/s [2024-11-15T09:28:24.132Z] 6347.75 IOPS, 49.59 MiB/s [2024-11-15T09:28:25.063Z] 6365.60 IOPS, 49.73 MiB/s [2024-11-15T09:28:25.995Z] 6387.83 IOPS, 49.90 MiB/s [2024-11-15T09:28:26.929Z] 6398.29 IOPS, 49.99 MiB/s [2024-11-15T09:28:27.862Z] 6411.38 IOPS, 50.09 MiB/s [2024-11-15T09:28:29.235Z] 6411.89 IOPS, 50.09 MiB/s [2024-11-15T09:28:29.235Z] 6427.40 IOPS, 50.21 MiB/s 00:09:40.772 Latency(us) 00:09:40.772 [2024-11-15T09:28:29.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.772 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:40.772 Verification LBA range: start 0x0 length 0x1000 00:09:40.772 Nvme1n1 : 10.01 6427.46 50.21 0.00 0.00 19863.54 3106.89 26796.94 00:09:40.772 [2024-11-15T09:28:29.235Z] =================================================================================================================== 00:09:40.772 [2024-11-15T09:28:29.235Z] Total : 6427.46 50.21 0.00 0.00 19863.54 3106.89 26796.94 00:09:40.772 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=296905 00:09:40.772 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:40.772 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.772 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:40.772 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:40.772 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:40.772 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:40.772 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:40.772 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:40.772 { 00:09:40.772 "params": { 00:09:40.772 "name": 
"Nvme$subsystem", 00:09:40.772 "trtype": "$TEST_TRANSPORT", 00:09:40.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.772 "adrfam": "ipv4", 00:09:40.772 "trsvcid": "$NVMF_PORT", 00:09:40.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.772 "hdgst": ${hdgst:-false}, 00:09:40.772 "ddgst": ${ddgst:-false} 00:09:40.772 }, 00:09:40.772 "method": "bdev_nvme_attach_controller" 00:09:40.772 } 00:09:40.772 EOF 00:09:40.772 )") 00:09:40.772 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:40.772 [2024-11-15 10:28:29.042619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.772 [2024-11-15 10:28:29.042686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.772 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:40.772 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:40.772 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:40.772 "params": { 00:09:40.772 "name": "Nvme1", 00:09:40.772 "trtype": "tcp", 00:09:40.772 "traddr": "10.0.0.2", 00:09:40.772 "adrfam": "ipv4", 00:09:40.772 "trsvcid": "4420", 00:09:40.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.772 "hdgst": false, 00:09:40.772 "ddgst": false 00:09:40.772 }, 00:09:40.772 "method": "bdev_nvme_attach_controller" 00:09:40.772 }' 00:09:40.772 [2024-11-15 10:28:29.050570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.772 [2024-11-15 10:28:29.050596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.772 [2024-11-15 10:28:29.058592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.772 [2024-11-15 10:28:29.058615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.772 [2024-11-15 10:28:29.066609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.772 [2024-11-15 10:28:29.066630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.772 [2024-11-15 10:28:29.074635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.772 [2024-11-15 10:28:29.074670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.772 [2024-11-15 10:28:29.081615] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:09:40.772 [2024-11-15 10:28:29.081687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid296905 ] 00:09:40.772 [2024-11-15 10:28:29.082672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.772 [2024-11-15 10:28:29.082693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.772 [2024-11-15 10:28:29.090692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.772 [2024-11-15 10:28:29.090736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.772 [2024-11-15 10:28:29.098711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.772 [2024-11-15 10:28:29.098737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.772 [2024-11-15 10:28:29.106728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.772 [2024-11-15 10:28:29.106748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.772 [2024-11-15 10:28:29.114765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.114785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.122785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.122806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.130806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.130826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.138852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.138874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.146849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.146869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.150914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.773 [2024-11-15 10:28:29.154873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.154898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.162935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.162975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.170922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.170946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.178934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.178955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:40.773 [2024-11-15 10:28:29.186957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.186979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.194979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.195000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.202998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.203018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.211033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.211053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.211604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.773 [2024-11-15 10:28:29.219040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.219059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.227097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.227130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.773 [2024-11-15 10:28:29.235145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.773 [2024-11-15 10:28:29.235184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.030 [2024-11-15 10:28:29.243153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.243196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.251167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.251206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.259188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.259232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.267210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.267251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.275226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.275262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.283214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.283235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.291268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.291310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 
10:28:29.299291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.299331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.307287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.307311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.315297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.315317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.323316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.323336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.331655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.331679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.339671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.339695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.347687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.347709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.355720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.355741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.363725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.363747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.371742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.371764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.379777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.379799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.387815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.387835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.395812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.395835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 [2024-11-15 10:28:29.403829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.031 [2024-11-15 10:28:29.403850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.031 Running I/O for 5 seconds... 
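The repeating pair of target-side messages in this stretch ("Requested NSID 1 already in use" / "Unable to add namespace") is the expected rejection path: while bdevperf keeps I/O in flight, the test evidently keeps issuing nvmf_subsystem_add_ns for NSID 1, which was already allocated to malloc0 during setup. One such rejection can be reproduced against the target configured above with:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# NSID 1 already belongs to malloc0, so a second add with the same NSID fails
# and the target logs the same message pair seen throughout this section.
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
    || echo "nvmf_subsystem_add_ns rejected: NSID 1 already in use"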
00:09:41.031 [2024-11-15 10:28:29.411922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:41.031 [2024-11-15 10:28:29.411946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:41.031 [2024-11-15 10:28:29.424953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:41.031 [2024-11-15 10:28:29.424978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[repeated output condensed: the same pair of errors, subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace, recurs roughly every 10 ms from 2024-11-15 10:28:29.435 through 10:28:32.674 (elapsed 00:09:41.031 to 00:09:44.395); the periodic bdevperf throughput samples from that window are kept below]
00:09:42.065 12030.00 IOPS, 93.98 MiB/s [2024-11-15T09:28:30.528Z]
00:09:43.101 12033.50 IOPS, 94.01 MiB/s [2024-11-15T09:28:31.564Z]
00:09:44.137 12048.33 IOPS, 94.13 MiB/s [2024-11-15T09:28:32.600Z]
00:09:44.395 [2024-11-15 10:28:32.685136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:44.395 [2024-11-15 10:28:32.685162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:44.395 [2024-11-15 10:28:32.695449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:44.395 [2024-11-15 10:28:32.695474]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-15 10:28:32.706048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-15 10:28:32.706073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-15 10:28:32.716696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-15 10:28:32.716735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-15 10:28:32.726915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-15 10:28:32.726940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-15 10:28:32.737329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-15 10:28:32.737376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.396 [2024-11-15 10:28:32.747456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.396 [2024-11-15 10:28:32.747483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.396 [2024-11-15 10:28:32.757659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.396 [2024-11-15 10:28:32.757686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.396 [2024-11-15 10:28:32.768057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.396 [2024-11-15 10:28:32.768081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.396 [2024-11-15 10:28:32.778712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.396 [2024-11-15 10:28:32.778736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.396 [2024-11-15 10:28:32.791274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.396 [2024-11-15 10:28:32.791298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.396 [2024-11-15 10:28:32.801190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.396 [2024-11-15 10:28:32.801214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.396 [2024-11-15 10:28:32.811496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.396 [2024-11-15 10:28:32.811523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.396 [2024-11-15 10:28:32.821931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.396 [2024-11-15 10:28:32.821956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.396 [2024-11-15 10:28:32.832804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.396 [2024-11-15 10:28:32.832828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.396 [2024-11-15 10:28:32.842716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.396 [2024-11-15 10:28:32.842740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.396 [2024-11-15 10:28:32.853099] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.396 [2024-11-15 10:28:32.853123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:32.866372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:32.866399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:32.877964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:32.877989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:32.887206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:32.887230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:32.898391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:32.898416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:32.910492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:32.910517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:32.920625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:32.920665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:32.930997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:32.931021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:32.941373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:32.941399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:32.951279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:32.951303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:32.961049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:32.961073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:32.970913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:32.970937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:32.981464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:32.981489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:32.994949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:32.994974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:33.006087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:33.006111] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:33.015354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:33.015388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:33.026862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:33.026886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:33.037505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:33.037531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:33.048160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:33.048185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:33.060102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:33.060127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:33.069966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:33.069990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:33.080509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:33.080534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:33.091398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:33.091440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:33.102306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:33.102330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.654 [2024-11-15 10:28:33.113383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.654 [2024-11-15 10:28:33.113408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.124904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.124928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.135381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.135407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.147894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.147918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.158078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.158102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.168545] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.168571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.178834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.178859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.189291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.189316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.199999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.200022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.210688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.210727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.221422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.221448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.231786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.231811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.242437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.242463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.255030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.255054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.266606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.266631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.275637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.275664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.287140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.287164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.297639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.297679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.307791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.307816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.318336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.318383] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.330839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.330863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.340699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.340740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.351121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.351146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.361885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.361909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.913 [2024-11-15 10:28:33.371889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.913 [2024-11-15 10:28:33.371913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.382972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.382996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.394840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.394864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.404935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.404959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.414802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.414826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 12090.25 IOPS, 94.46 MiB/s [2024-11-15T09:28:33.635Z] [2024-11-15 10:28:33.424141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.424165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.434811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.434835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.444696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.444735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.454599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.454625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.464633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.464674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 
10:28:33.474757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.474781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.484521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.484546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.494509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.494535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.504471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.504497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.514464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.514489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.524877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.524908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.536885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.536909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.545841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.545866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.556937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.556961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.567569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.567595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.577983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.578007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.589896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.589921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.598964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.598988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.608894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.608918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.618749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.618774] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.172 [2024-11-15 10:28:33.628900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.172 [2024-11-15 10:28:33.628924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.640028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.640053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.651154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.651179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.661911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.661935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.672396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.672428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.684540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.684566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.694183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.694210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.704454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.704480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.714138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.714163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.724658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.724690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.737040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.737065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.747208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.747232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.757804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.757829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.770303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.770328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.779960] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.779985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.790391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.790439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.800297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.800323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.810698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.810737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.820738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.820763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.830815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.830847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.840679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.840719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.850865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.850889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.861272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.861296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.871770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.871794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.881976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.882000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.431 [2024-11-15 10:28:33.892809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.431 [2024-11-15 10:28:33.892835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:33.904222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:33.904246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:33.914193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:33.914216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:33.924500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:33.924534] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:33.935123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:33.935147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:33.945192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:33.945217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:33.955742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:33.955766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:33.965794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:33.965818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:33.975846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:33.975871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:33.986391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:33.986418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:33.999137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:33.999162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.008959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.008983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.019101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.019125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.029751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.029776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.042015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.042039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.051915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.051938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.062482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.062508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.074546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.074574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.084278] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.084302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.095360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.095394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.105897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.105922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.116161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.116184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.126292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.126316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.136530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.136556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.690 [2024-11-15 10:28:34.146603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.690 [2024-11-15 10:28:34.146630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.157965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.157992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.170598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.170624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.187966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.187991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.197894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.197919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.208201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.208225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.218405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.218432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.228748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.228773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.239303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.239326] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.251576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.251602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.261421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.261448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.271848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.271872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.282046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.282070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.292297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.292321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.302820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.302845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.313147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.313171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.323036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.323060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.333454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.333480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.343793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.343818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.353872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.353897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.364126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.364150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.374897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.374921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.385548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.949 [2024-11-15 10:28:34.385573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.949 [2024-11-15 10:28:34.396023] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:45.949 [2024-11-15 10:28:34.396047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:45.949 [2024-11-15 10:28:34.406246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:45.949 [2024-11-15 10:28:34.406269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.208 [2024-11-15 10:28:34.417084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.208 [2024-11-15 10:28:34.417124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.208 12135.40 IOPS, 94.81 MiB/s [2024-11-15T09:28:34.671Z] [2024-11-15 10:28:34.426153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.208 [2024-11-15 10:28:34.426177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.208 [2024-11-15 10:28:34.468647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.208 [2024-11-15 10:28:34.468685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.208
00:09:46.208 Latency(us)
00:09:46.208 [2024-11-15T09:28:34.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:46.208 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:46.208 Nvme1n1 : 5.05 12040.82 94.07 0.00 0.00 10530.87 4077.80 51652.08
00:09:46.208 [2024-11-15T09:28:34.671Z] ===================================================================================================================
00:09:46.208 [2024-11-15T09:28:34.671Z] Total : 12040.82 94.07 0.00 0.00 10530.87 4077.80 51652.08
00:09:46.208 [2024-11-15 10:28:34.475294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.208 [2024-11-15 10:28:34.475317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.208 [2024-11-15 10:28:34.483314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.208 [2024-11-15 10:28:34.483337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.208 [2024-11-15 10:28:34.491337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.208 [2024-11-15 10:28:34.491381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.208 [2024-11-15 10:28:34.499431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.208 [2024-11-15 10:28:34.499482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.208 [2024-11-15 10:28:34.507457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.208 [2024-11-15 10:28:34.507524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.208 [2024-11-15 10:28:34.515476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.208 [2024-11-15 10:28:34.515525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.208 [2024-11-15 10:28:34.523508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.208 [2024-11-15 10:28:34.523563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.208 [2024-11-15
10:28:34.531508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.531559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.539549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.539599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.547555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.547602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.555604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.555657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.563615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.563663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.571629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.571680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.579658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.579708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.587675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.587725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.595696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.595744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.603716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.603764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.611737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.611786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.619725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.619747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.627766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.627787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.635731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.635751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.643770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.643791] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.651823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.651863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.659868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.659926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.208 [2024-11-15 10:28:34.667893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.208 [2024-11-15 10:28:34.667941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.474 [2024-11-15 10:28:34.675866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.474 [2024-11-15 10:28:34.675903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.474 [2024-11-15 10:28:34.683864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.474 [2024-11-15 10:28:34.683885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.474 [2024-11-15 10:28:34.691882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.474 [2024-11-15 10:28:34.691902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (296905) - No such process 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 296905 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.474 delay0 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.474 10:28:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:46.474 [2024-11-15 10:28:34.866503] nvme_fabric.c: 
295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:53.046 Initializing NVMe Controllers
00:09:53.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:53.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:53.046 Initialization complete. Launching workers.
00:09:53.046 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1787
00:09:53.046 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2066, failed to submit 41
00:09:53.046 success 1905, unsuccessful 161, failed 0
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:53.046 rmmod nvme_tcp
00:09:53.046 rmmod nvme_fabrics
00:09:53.046 rmmod nvme_keyring
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 295675 ']'
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 295675
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 295675 ']'
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 295675
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:53.046 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 295675
00:09:53.305 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:09:53.305 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:09:53.305 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 295675'
00:09:53.305 killing process with pid 295675
00:09:53.305 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 295675
00:09:53.305 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 295675
00:09:53.305 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:53.305 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:53.305 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.305 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:53.306 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:53.306 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.306 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.306 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.306 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:53.306 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.306 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.306 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.842 00:09:55.842 real 0m28.298s 00:09:55.842 user 0m41.476s 00:09:55.842 sys 0m8.649s 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:55.842 ************************************ 00:09:55.842 END TEST nvmf_zcopy 00:09:55.842 ************************************ 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.842 ************************************ 00:09:55.842 START TEST nvmf_nmic 00:09:55.842 ************************************ 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:55.842 * Looking for test storage... 
00:09:55.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:55.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.842 --rc genhtml_branch_coverage=1 00:09:55.842 --rc genhtml_function_coverage=1 00:09:55.842 --rc genhtml_legend=1 00:09:55.842 --rc geninfo_all_blocks=1 00:09:55.842 --rc geninfo_unexecuted_blocks=1 00:09:55.842 00:09:55.842 ' 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:55.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.842 --rc genhtml_branch_coverage=1 00:09:55.842 --rc genhtml_function_coverage=1 00:09:55.842 --rc genhtml_legend=1 00:09:55.842 --rc geninfo_all_blocks=1 00:09:55.842 --rc geninfo_unexecuted_blocks=1 00:09:55.842 00:09:55.842 ' 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:55.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.842 --rc genhtml_branch_coverage=1 00:09:55.842 --rc genhtml_function_coverage=1 00:09:55.842 --rc genhtml_legend=1 00:09:55.842 --rc geninfo_all_blocks=1 00:09:55.842 --rc geninfo_unexecuted_blocks=1 00:09:55.842 00:09:55.842 ' 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:55.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.842 --rc genhtml_branch_coverage=1 00:09:55.842 --rc genhtml_function_coverage=1 00:09:55.842 --rc genhtml_legend=1 00:09:55.842 --rc geninfo_all_blocks=1 00:09:55.842 --rc geninfo_unexecuted_blocks=1 00:09:55.842 00:09:55.842 ' 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
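The lt/cmp_versions dance traced above decides whether the installed lcov predates 2.x by splitting both version strings on ".", "-" and ":" and comparing the fields numerically. The helper below is an illustrative reconstruction of that logic from the xtrace, not a copy of scripts/common.sh:

    # Sketch: return 0 when "$1 $2 $3" holds, e.g. cmp_versions 1.15 "<" 2
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $op == ">" || $op == ">=" ]]; return; }
            (( a < b )) && { [[ $op == "<" || $op == "<=" ]]; return; }
        done
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]
    }
    cmp_versions 1.15 "<" 2 && echo "lcov predates 2.x: use legacy --rc options"

In this run the comparison succeeds (1 < 2 in the first field), which is why the legacy --rc lcov_branch_coverage/lcov_function_coverage options are exported.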
00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.842 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.842 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:55.843 
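Stripped of the PATH plumbing, the part of nvmf/common.sh and nmic.sh traced above that the rest of the test depends on is a handful of assignments plus the nvmftestinit call. The sketch below pulls those together; the values come straight from the trace, but the uuid extraction is an assumption (only the resulting NVME_HOSTID value is visible in the log):

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    # One host NQN per run; the host ID is the UUID portion of that NQN
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:8b46...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: strip everything up to "uuid:"
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    MALLOC_BDEV_SIZE=64      # MiB backing the test namespace
    MALLOC_BLOCK_SIZE=512    # bytes
    nvmftestinit             # detects the e810 ports and builds the topology traced below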
10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.843 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:57.758 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:57.758 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.758 10:28:46 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:57.758 Found net devices under 0000:82:00.0: cvl_0_0 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.758 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:57.759 Found net devices under 0000:82:00.1: cvl_0_1 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.759 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:58.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:09:58.017 00:09:58.017 --- 10.0.0.2 ping statistics --- 00:09:58.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.017 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:58.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:09:58.017 00:09:58.017 --- 10.0.0.1 ping statistics --- 00:09:58.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.017 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.017 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=300424 00:09:58.018 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.018 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 300424 00:09:58.018 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 300424 ']' 00:09:58.018 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.018 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:58.018 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.018 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:58.018 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.018 [2024-11-15 10:28:46.432512] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
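The nvmftestinit/nvmfappstart trace above splits the two cvl_0_* ports of the e810 card so one physical host can act as both target and initiator: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and nvmf_tgt is launched inside the namespace. A recap using only commands that appear in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-facing port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP (port 4420) in, tagged SPDK_NVMF so nvmftestfini can strip the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # The target runs inside the namespace; -m 0xF gives it the 4 reactor cores seen in the log
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &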
00:09:58.018 [2024-11-15 10:28:46.432611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.276 [2024-11-15 10:28:46.506859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.276 [2024-11-15 10:28:46.569475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.276 [2024-11-15 10:28:46.569542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.276 [2024-11-15 10:28:46.569569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.276 [2024-11-15 10:28:46.569581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.276 [2024-11-15 10:28:46.569590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.276 [2024-11-15 10:28:46.571204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.276 [2024-11-15 10:28:46.571270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.276 [2024-11-15 10:28:46.571337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.276 [2024-11-15 10:28:46.571340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.277 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:58.277 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:58.277 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:58.277 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:58.277 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.277 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.277 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.277 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.277 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.277 [2024-11-15 10:28:46.724841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.277 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.277 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:58.277 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.277 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.535 Malloc0 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.535 [2024-11-15 10:28:46.792468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:58.535 test case1: single bdev can't be used in multiple subsystems 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.535 [2024-11-15 10:28:46.816264] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:58.535 [2024-11-15 10:28:46.816292] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:58.535 [2024-11-15 10:28:46.816323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.535 request: 00:09:58.535 { 00:09:58.535 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:58.535 "namespace": { 00:09:58.535 "bdev_name": "Malloc0", 00:09:58.535 "no_auto_visible": false 
00:09:58.535 }, 00:09:58.535 "method": "nvmf_subsystem_add_ns", 00:09:58.535 "req_id": 1 00:09:58.535 } 00:09:58.535 Got JSON-RPC error response 00:09:58.535 response: 00:09:58.535 { 00:09:58.535 "code": -32602, 00:09:58.535 "message": "Invalid parameters" 00:09:58.535 } 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:58.535 Adding namespace failed - expected result. 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:58.535 test case2: host connect to nvmf target in multiple paths 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.535 [2024-11-15 10:28:46.824404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.535 10:28:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:59.100 10:28:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:00.033 10:28:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.033 10:28:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:10:00.033 10:28:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.033 10:28:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:00.033 10:28:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:10:01.932 10:28:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:01.932 10:28:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:01.932 10:28:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:01.932 10:28:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:01.932 10:28:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.932 10:28:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:10:01.932 10:28:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:01.932 [global] 00:10:01.932 thread=1 00:10:01.932 invalidate=1 00:10:01.932 rw=write 00:10:01.932 time_based=1 00:10:01.932 runtime=1 00:10:01.932 ioengine=libaio 00:10:01.932 direct=1 00:10:01.932 bs=4096 00:10:01.932 iodepth=1 00:10:01.932 norandommap=0 00:10:01.932 numjobs=1 00:10:01.932 00:10:01.932 verify_dump=1 00:10:01.932 verify_backlog=512 00:10:01.932 verify_state_save=0 00:10:01.932 do_verify=1 00:10:01.932 verify=crc32c-intel 00:10:01.932 [job0] 00:10:01.932 filename=/dev/nvme0n1 00:10:01.932 Could not set queue depth (nvme0n1) 00:10:02.507 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.507 fio-3.35 00:10:02.507 Starting 1 thread 00:10:03.443 00:10:03.443 job0: (groupid=0, jobs=1): err= 0: pid=300949: Fri Nov 15 10:28:51 2024 00:10:03.443 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:03.443 slat (nsec): min=4381, max=58807, avg=12346.14, stdev=9509.78 00:10:03.443 clat (usec): min=167, max=1227, avg=245.13, stdev=55.59 00:10:03.443 lat (usec): min=172, max=1246, avg=257.47, stdev=60.63 00:10:03.443 clat percentiles (usec): 00:10:03.443 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 204], 00:10:03.443 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 233], 60.00th=[ 247], 00:10:03.443 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 326], 95.00th=[ 334], 00:10:03.443 | 99.00th=[ 355], 99.50th=[ 371], 99.90th=[ 461], 99.95th=[ 1172], 00:10:03.443 | 99.99th=[ 1221] 00:10:03.443 write: IOPS=2379, BW=9518KiB/s (9747kB/s)(9528KiB/1001msec); 0 zone resets 00:10:03.443 slat (usec): min=5, max=28807, avg=24.80, stdev=590.01 00:10:03.443 clat (usec): min=118, max=1178, avg=166.92, stdev=39.83 00:10:03.443 lat (usec): min=125, max=29098, avg=191.72, stdev=593.97 00:10:03.443 clat percentiles (usec): 00:10:03.443 | 1.00th=[ 125], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 147], 00:10:03.443 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:10:03.443 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 204], 00:10:03.443 | 99.00th=[ 223], 99.50th=[ 233], 99.90th=[ 1074], 99.95th=[ 1106], 00:10:03.443 | 99.99th=[ 1172] 00:10:03.443 bw ( KiB/s): min=11336, max=11336, per=100.00%, avg=11336.00, stdev= 0.00, samples=1 00:10:03.443 iops : min= 2834, max= 2834, avg=2834.00, stdev= 0.00, samples=1 00:10:03.443 lat (usec) : 250=82.80%, 500=17.09% 00:10:03.443 lat (msec) : 2=0.11% 00:10:03.443 cpu : usr=3.30%, sys=5.50%, ctx=4433, majf=0, minf=1 00:10:03.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.443 issued rwts: total=2048,2382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.443 00:10:03.443 Run status group 0 (all jobs): 00:10:03.443 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:03.443 WRITE: bw=9518KiB/s (9747kB/s), 9518KiB/s-9518KiB/s (9747kB/s-9747kB/s), io=9528KiB (9757kB), run=1001-1001msec 00:10:03.443 00:10:03.443 Disk stats (read/write): 00:10:03.443 nvme0n1: ios=1957/2048, merge=0/0, ticks=1451/345, in_queue=1796, util=98.70% 00:10:03.443 10:28:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.701 rmmod nvme_tcp 00:10:03.701 rmmod nvme_fabrics 00:10:03.701 rmmod nvme_keyring 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 300424 ']' 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 300424 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 300424 ']' 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 300424 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:10:03.701 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:03.702 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 300424 00:10:03.702 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:03.702 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:03.702 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 300424' 00:10:03.702 killing process with pid 300424 00:10:03.702 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 300424 00:10:03.702 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@976 -- # wait 300424 00:10:03.961 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:03.961 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:03.961 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:03.961 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:03.961 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:03.961 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:03.962 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.962 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.962 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:03.962 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.962 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.962 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.507 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:06.507 00:10:06.507 real 0m10.549s 00:10:06.507 user 0m24.156s 00:10:06.507 sys 0m3.030s 00:10:06.507 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:06.507 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.507 ************************************ 00:10:06.507 END TEST nvmf_nmic 00:10:06.507 ************************************ 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.508 ************************************ 00:10:06.508 START TEST nvmf_fio_target 00:10:06.508 ************************************ 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:06.508 * Looking for test storage... 
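Before fio.sh starts, it is worth restating what the nmic run above actually exercised; the sketch strings together the rpc_cmd and nvme calls visible in the trace, with the expected-failure bookkeeping (nmic_status) and helpers such as waitforserial reduced to comments:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # case1: the same bdev cannot back a namespace in a second subsystem
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # must fail (JSON-RPC -32602)

    # case2: one host reaches cnode1 over two listeners, then runs a verified 4k write job
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # tears down both controllers, as logged above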
00:10:06.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:06.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.508 --rc genhtml_branch_coverage=1 00:10:06.508 --rc genhtml_function_coverage=1 00:10:06.508 --rc genhtml_legend=1 00:10:06.508 --rc geninfo_all_blocks=1 00:10:06.508 --rc geninfo_unexecuted_blocks=1 00:10:06.508 00:10:06.508 ' 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:06.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.508 --rc genhtml_branch_coverage=1 00:10:06.508 --rc genhtml_function_coverage=1 00:10:06.508 --rc genhtml_legend=1 00:10:06.508 --rc geninfo_all_blocks=1 00:10:06.508 --rc geninfo_unexecuted_blocks=1 00:10:06.508 00:10:06.508 ' 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:06.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.508 --rc genhtml_branch_coverage=1 00:10:06.508 --rc genhtml_function_coverage=1 00:10:06.508 --rc genhtml_legend=1 00:10:06.508 --rc geninfo_all_blocks=1 00:10:06.508 --rc geninfo_unexecuted_blocks=1 00:10:06.508 00:10:06.508 ' 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:06.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.508 --rc genhtml_branch_coverage=1 00:10:06.508 --rc genhtml_function_coverage=1 00:10:06.508 --rc genhtml_legend=1 00:10:06.508 --rc geninfo_all_blocks=1 00:10:06.508 --rc geninfo_unexecuted_blocks=1 00:10:06.508 00:10:06.508 ' 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.508 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:06.509 10:28:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:06.509 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.416 10:28:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:10:08.416 Found 0000:82:00.0 (0x8086 - 0x159b) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:10:08.416 Found 0000:82:00.1 (0x8086 - 0x159b) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.416 10:28:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:10:08.416 Found net devices under 0000:82:00.0: cvl_0_0 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:10:08.416 Found net devices under 0000:82:00.1: cvl_0_1 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.416 10:28:56 
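Device discovery above walks the supported Intel/Mellanox PCI IDs and resolves each matching function to its kernel netdev through sysfs. A hedged illustration of that mapping for the two E810 ports found on this rig (PCI addresses copied from the log; the loop simplifies the pci_net_devs globbing in nvmf/common.sh):

# For each NVMe-oF capable port, the netdev name is the directory under
# /sys/bus/pci/devices/<bdf>/net/ (here both resolve to cvl_0_0 and cvl_0_1).
for pci in 0000:82:00.0 0000:82:00.1; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e "$path" ]] && echo "Found net devices under $pci: ${path##*/}"
    done
done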
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:10:08.416 00:10:08.416 --- 10.0.0.2 ping statistics --- 00:10:08.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.416 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:08.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:10:08.416 00:10:08.416 --- 10.0.0.1 ping statistics --- 00:10:08.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.416 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.416 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.675 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:08.675 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:08.675 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:08.675 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.675 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=303157 00:10:08.675 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:08.675 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 303157 00:10:08.675 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 303157 ']' 00:10:08.675 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.675 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:08.675 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.675 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:08.675 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.675 [2024-11-15 10:28:56.943787] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
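nvmf_tcp_init, traced above, gives the target side its own network namespace so target and initiator can exchange real NVMe/TCP traffic over the two E810 ports of a single host. A condensed, hedged sketch of that setup (interface names and addresses copied from the log; the nvmf_tgt path is written relative to the SPDK tree, while the trace uses the absolute workspace path):

TARGET_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                  # drop any stale addresses
ip -4 addr flush cvl_0_1

ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"    # target port moves into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

# open the NVMe/TCP listener port toward the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1      # target -> initiator

# start the target inside the namespace, as nvmfappstart does above
ip netns exec "$TARGET_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The iptables rule added by the trace also tags itself with an SPDK_NVMF comment (-m comment) so later cleanup can find it; that detail is omitted here for brevity.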
00:10:08.675 [2024-11-15 10:28:56.943874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.675 [2024-11-15 10:28:57.013622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.675 [2024-11-15 10:28:57.068071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.675 [2024-11-15 10:28:57.068143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.675 [2024-11-15 10:28:57.068157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.675 [2024-11-15 10:28:57.068168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.675 [2024-11-15 10:28:57.068177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.675 [2024-11-15 10:28:57.069770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.675 [2024-11-15 10:28:57.069827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.675 [2024-11-15 10:28:57.069935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.675 [2024-11-15 10:28:57.069938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.934 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:08.934 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:10:08.934 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:08.934 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:08.934 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.934 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.934 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:09.192 [2024-11-15 10:28:57.470041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.192 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.451 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:09.451 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.709 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:09.709 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.967 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:09.967 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.225 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:10.225 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:10.483 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.050 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:11.050 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.050 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:11.050 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.308 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:11.308 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:11.875 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:11.875 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:11.875 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:12.133 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:12.133 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.392 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.650 [2024-11-15 10:29:01.104560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.908 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:13.167 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:13.424 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.996 10:29:02 
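The target configuration assembled above through scripts/rpc.py is: a TCP transport, two standalone malloc bdevs, a raid0 and a concat bdev built from four more mallocs, and one subsystem exposing all four as namespaces on 10.0.0.2:4420, after which the initiator attaches with nvme-cli. A hedged recap of that sequence (paths shortened to the SPDK tree; transport flags and identifiers copied verbatim from the trace; Malloc5 and Malloc6 are created exactly like Malloc4):

rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192        # flags as passed by fio.sh

$rpc bdev_malloc_create 64 512                      # 64 MiB, 512 B blocks -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_malloc_create 64 512                      # -> Malloc2
$rpc bdev_malloc_create 64 512                      # -> Malloc3
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_malloc_create 64 512                      # -> Malloc4 (Malloc5, Malloc6 likewise)
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

# initiator side (host namespace); hostnqn/hostid were produced by
# 'nvme gen-hostnqn' earlier in the trace
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd \
    --hostid=8b464f06-2980-e311-ba20-001e67a94acd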
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:13.996 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:10:13.996 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.996 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:10:13.996 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:10:13.996 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:10:15.894 10:29:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:15.894 10:29:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:15.894 10:29:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.152 10:29:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:10:16.152 10:29:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.152 10:29:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:10:16.152 10:29:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:16.152 [global] 00:10:16.152 thread=1 00:10:16.152 invalidate=1 00:10:16.152 rw=write 00:10:16.152 time_based=1 00:10:16.152 runtime=1 00:10:16.152 ioengine=libaio 00:10:16.152 direct=1 00:10:16.152 bs=4096 00:10:16.152 iodepth=1 00:10:16.152 norandommap=0 00:10:16.152 numjobs=1 00:10:16.152 00:10:16.152 verify_dump=1 00:10:16.152 verify_backlog=512 00:10:16.152 verify_state_save=0 00:10:16.152 do_verify=1 00:10:16.152 verify=crc32c-intel 00:10:16.152 [job0] 00:10:16.152 filename=/dev/nvme0n1 00:10:16.152 [job1] 00:10:16.152 filename=/dev/nvme0n2 00:10:16.152 [job2] 00:10:16.152 filename=/dev/nvme0n3 00:10:16.152 [job3] 00:10:16.152 filename=/dev/nvme0n4 00:10:16.152 Could not set queue depth (nvme0n1) 00:10:16.152 Could not set queue depth (nvme0n2) 00:10:16.152 Could not set queue depth (nvme0n3) 00:10:16.152 Could not set queue depth (nvme0n4) 00:10:16.152 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.152 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.152 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.152 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.152 fio-3.35 00:10:16.152 Starting 4 threads 00:10:17.534 00:10:17.534 job0: (groupid=0, jobs=1): err= 0: pid=304229: Fri Nov 15 10:29:05 2024 00:10:17.534 read: IOPS=2157, BW=8631KiB/s (8839kB/s)(8640KiB/1001msec) 00:10:17.534 slat (nsec): min=6126, max=69326, avg=11377.99, stdev=5284.92 00:10:17.534 clat (usec): min=164, max=790, avg=233.81, stdev=47.72 00:10:17.534 lat (usec): min=170, max=798, avg=245.19, stdev=51.25 00:10:17.534 clat percentiles (usec): 00:10:17.534 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 200], 
00:10:17.534 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 227], 60.00th=[ 237], 00:10:17.534 | 70.00th=[ 247], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 293], 00:10:17.534 | 99.00th=[ 461], 99.50th=[ 537], 99.90th=[ 603], 99.95th=[ 611], 00:10:17.534 | 99.99th=[ 791] 00:10:17.534 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:17.534 slat (nsec): min=8105, max=50580, avg=12218.81, stdev=4639.11 00:10:17.534 clat (usec): min=127, max=401, avg=165.32, stdev=23.80 00:10:17.534 lat (usec): min=135, max=437, avg=177.54, stdev=26.09 00:10:17.534 clat percentiles (usec): 00:10:17.534 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:10:17.534 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 169], 00:10:17.534 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 206], 00:10:17.534 | 99.00th=[ 237], 99.50th=[ 249], 99.90th=[ 363], 99.95th=[ 375], 00:10:17.534 | 99.99th=[ 400] 00:10:17.534 bw ( KiB/s): min=11016, max=11016, per=69.39%, avg=11016.00, stdev= 0.00, samples=1 00:10:17.534 iops : min= 2754, max= 2754, avg=2754.00, stdev= 0.00, samples=1 00:10:17.534 lat (usec) : 250=87.58%, 500=12.06%, 750=0.34%, 1000=0.02% 00:10:17.534 cpu : usr=4.50%, sys=7.20%, ctx=4721, majf=0, minf=1 00:10:17.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.534 issued rwts: total=2160,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.534 job1: (groupid=0, jobs=1): err= 0: pid=304230: Fri Nov 15 10:29:05 2024 00:10:17.534 read: IOPS=22, BW=89.1KiB/s (91.3kB/s)(92.0KiB/1032msec) 00:10:17.534 slat (nsec): min=6950, max=34451, avg=20572.78, stdev=7795.55 00:10:17.534 clat (usec): min=282, max=41020, avg=39192.44, stdev=8482.45 00:10:17.534 lat (usec): min=299, max=41035, avg=39213.02, stdev=8483.20 00:10:17.534 clat percentiles (usec): 00:10:17.534 | 1.00th=[ 281], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:17.534 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:17.534 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:17.534 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:17.534 | 99.99th=[41157] 00:10:17.534 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:10:17.534 slat (nsec): min=6368, max=36552, avg=8934.07, stdev=2959.55 00:10:17.534 clat (usec): min=138, max=2519, avg=241.29, stdev=123.07 00:10:17.534 lat (usec): min=145, max=2535, avg=250.22, stdev=123.69 00:10:17.534 clat percentiles (usec): 00:10:17.534 | 1.00th=[ 169], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 208], 00:10:17.534 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:10:17.534 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 273], 95.00th=[ 330], 00:10:17.534 | 99.00th=[ 594], 99.50th=[ 775], 99.90th=[ 2507], 99.95th=[ 2507], 00:10:17.534 | 99.99th=[ 2507] 00:10:17.534 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:10:17.534 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:17.534 lat (usec) : 250=82.62%, 500=11.96%, 750=0.56%, 1000=0.37% 00:10:17.534 lat (msec) : 2=0.19%, 4=0.19%, 50=4.11% 00:10:17.534 cpu : usr=0.00%, sys=0.68%, ctx=535, majf=0, minf=1 00:10:17.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.534 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.534 job2: (groupid=0, jobs=1): err= 0: pid=304232: Fri Nov 15 10:29:05 2024 00:10:17.534 read: IOPS=22, BW=91.6KiB/s (93.8kB/s)(92.0KiB/1004msec) 00:10:17.534 slat (nsec): min=8455, max=34889, avg=22266.87, stdev=9622.91 00:10:17.534 clat (usec): min=260, max=41041, avg=39186.58, stdev=8485.83 00:10:17.534 lat (usec): min=277, max=41056, avg=39208.84, stdev=8487.02 00:10:17.534 clat percentiles (usec): 00:10:17.534 | 1.00th=[ 262], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:17.534 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:17.534 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:17.534 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:17.534 | 99.99th=[41157] 00:10:17.534 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:10:17.534 slat (nsec): min=7289, max=45437, avg=11118.62, stdev=3422.98 00:10:17.534 clat (usec): min=146, max=459, avg=183.97, stdev=38.30 00:10:17.534 lat (usec): min=154, max=505, avg=195.09, stdev=39.83 00:10:17.534 clat percentiles (usec): 00:10:17.534 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:10:17.534 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:10:17.534 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 215], 95.00th=[ 255], 00:10:17.534 | 99.00th=[ 371], 99.50th=[ 392], 99.90th=[ 461], 99.95th=[ 461], 00:10:17.534 | 99.99th=[ 461] 00:10:17.534 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:10:17.534 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:17.534 lat (usec) : 250=90.65%, 500=5.23% 00:10:17.534 lat (msec) : 50=4.11% 00:10:17.534 cpu : usr=0.10%, sys=0.80%, ctx=535, majf=0, minf=1 00:10:17.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.534 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.534 job3: (groupid=0, jobs=1): err= 0: pid=304233: Fri Nov 15 10:29:05 2024 00:10:17.534 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec) 00:10:17.534 slat (nsec): min=8925, max=34605, avg=19754.90, stdev=9594.29 00:10:17.534 clat (usec): min=40856, max=41984, avg=41038.08, stdev=229.84 00:10:17.534 lat (usec): min=40888, max=41993, avg=41057.84, stdev=226.15 00:10:17.534 clat percentiles (usec): 00:10:17.534 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:17.534 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:17.534 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:17.534 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:17.534 | 99.99th=[42206] 00:10:17.534 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:10:17.534 slat (usec): min=9, max=1217, avg=14.26, stdev=53.37 00:10:17.534 clat (usec): min=136, max=3014, avg=229.36, stdev=144.11 00:10:17.534 lat (usec): 
min=147, max=3027, avg=243.62, stdev=154.61 00:10:17.534 clat percentiles (usec): 00:10:17.534 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 174], 20.00th=[ 198], 00:10:17.534 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:10:17.534 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 318], 00:10:17.534 | 99.00th=[ 437], 99.50th=[ 1172], 99.90th=[ 3032], 99.95th=[ 3032], 00:10:17.534 | 99.99th=[ 3032] 00:10:17.534 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:10:17.534 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:17.534 lat (usec) : 250=86.12%, 500=9.19%, 750=0.19% 00:10:17.534 lat (msec) : 2=0.38%, 4=0.19%, 50=3.94% 00:10:17.534 cpu : usr=0.10%, sys=1.00%, ctx=535, majf=0, minf=1 00:10:17.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.534 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.534 00:10:17.534 Run status group 0 (all jobs): 00:10:17.534 READ: bw=8632KiB/s (8839kB/s), 83.5KiB/s-8631KiB/s (85.5kB/s-8839kB/s), io=8908KiB (9122kB), run=1001-1032msec 00:10:17.534 WRITE: bw=15.5MiB/s (16.3MB/s), 1984KiB/s-9.99MiB/s (2032kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1032msec 00:10:17.534 00:10:17.534 Disk stats (read/write): 00:10:17.534 nvme0n1: ios=1877/2048, merge=0/0, ticks=1292/338, in_queue=1630, util=89.28% 00:10:17.534 nvme0n2: ios=68/512, merge=0/0, ticks=1174/122, in_queue=1296, util=90.84% 00:10:17.534 nvme0n3: ios=40/512, merge=0/0, ticks=1600/94, in_queue=1694, util=93.30% 00:10:17.534 nvme0n4: ios=80/512, merge=0/0, ticks=1296/111, in_queue=1407, util=92.19% 00:10:17.534 10:29:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:17.534 [global] 00:10:17.534 thread=1 00:10:17.534 invalidate=1 00:10:17.534 rw=randwrite 00:10:17.534 time_based=1 00:10:17.534 runtime=1 00:10:17.534 ioengine=libaio 00:10:17.534 direct=1 00:10:17.534 bs=4096 00:10:17.534 iodepth=1 00:10:17.534 norandommap=0 00:10:17.534 numjobs=1 00:10:17.534 00:10:17.534 verify_dump=1 00:10:17.534 verify_backlog=512 00:10:17.534 verify_state_save=0 00:10:17.534 do_verify=1 00:10:17.534 verify=crc32c-intel 00:10:17.534 [job0] 00:10:17.534 filename=/dev/nvme0n1 00:10:17.534 [job1] 00:10:17.534 filename=/dev/nvme0n2 00:10:17.534 [job2] 00:10:17.534 filename=/dev/nvme0n3 00:10:17.534 [job3] 00:10:17.535 filename=/dev/nvme0n4 00:10:17.535 Could not set queue depth (nvme0n1) 00:10:17.535 Could not set queue depth (nvme0n2) 00:10:17.535 Could not set queue depth (nvme0n3) 00:10:17.535 Could not set queue depth (nvme0n4) 00:10:17.792 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.792 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.792 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.792 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.792 fio-3.35 00:10:17.792 Starting 4 threads 00:10:19.167 00:10:19.167 job0: (groupid=0, 
jobs=1): err= 0: pid=304469: Fri Nov 15 10:29:07 2024 00:10:19.167 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:19.167 slat (nsec): min=7100, max=55058, avg=14313.16, stdev=6926.59 00:10:19.167 clat (usec): min=183, max=37949, avg=345.97, stdev=966.06 00:10:19.167 lat (usec): min=191, max=37958, avg=360.29, stdev=966.37 00:10:19.167 clat percentiles (usec): 00:10:19.167 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 229], 00:10:19.167 | 30.00th=[ 249], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 310], 00:10:19.167 | 70.00th=[ 347], 80.00th=[ 429], 90.00th=[ 490], 95.00th=[ 519], 00:10:19.167 | 99.00th=[ 603], 99.50th=[ 676], 99.90th=[ 1037], 99.95th=[38011], 00:10:19.167 | 99.99th=[38011] 00:10:19.167 write: IOPS=1994, BW=7976KiB/s (8167kB/s)(7984KiB/1001msec); 0 zone resets 00:10:19.167 slat (nsec): min=8561, max=80928, avg=13650.50, stdev=6603.36 00:10:19.167 clat (usec): min=133, max=430, avg=201.05, stdev=44.85 00:10:19.167 lat (usec): min=143, max=461, avg=214.70, stdev=47.30 00:10:19.167 clat percentiles (usec): 00:10:19.167 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 157], 00:10:19.167 | 30.00th=[ 172], 40.00th=[ 190], 50.00th=[ 202], 60.00th=[ 212], 00:10:19.167 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 277], 00:10:19.167 | 99.00th=[ 367], 99.50th=[ 396], 99.90th=[ 424], 99.95th=[ 433], 00:10:19.167 | 99.99th=[ 433] 00:10:19.167 bw ( KiB/s): min= 8192, max= 8192, per=40.38%, avg=8192.00, stdev= 0.00, samples=1 00:10:19.167 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:19.167 lat (usec) : 250=65.32%, 500=31.12%, 750=3.40%, 1000=0.11% 00:10:19.167 lat (msec) : 2=0.03%, 50=0.03% 00:10:19.167 cpu : usr=3.30%, sys=7.10%, ctx=3533, majf=0, minf=1 00:10:19.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.167 issued rwts: total=1536,1996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.167 job1: (groupid=0, jobs=1): err= 0: pid=304470: Fri Nov 15 10:29:07 2024 00:10:19.167 read: IOPS=503, BW=2013KiB/s (2062kB/s)(2092KiB/1039msec) 00:10:19.167 slat (nsec): min=6687, max=43441, avg=9482.27, stdev=5247.76 00:10:19.167 clat (usec): min=182, max=42213, avg=1563.32, stdev=7253.27 00:10:19.167 lat (usec): min=190, max=42221, avg=1572.80, stdev=7256.10 00:10:19.167 clat percentiles (usec): 00:10:19.167 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:10:19.167 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 233], 00:10:19.167 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 269], 95.00th=[ 306], 00:10:19.167 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:19.167 | 99.99th=[42206] 00:10:19.167 write: IOPS=985, BW=3942KiB/s (4037kB/s)(4096KiB/1039msec); 0 zone resets 00:10:19.167 slat (nsec): min=7238, max=47817, avg=10757.03, stdev=5131.25 00:10:19.167 clat (usec): min=126, max=473, avg=195.56, stdev=48.55 00:10:19.167 lat (usec): min=135, max=498, avg=206.32, stdev=50.79 00:10:19.167 clat percentiles (usec): 00:10:19.167 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 153], 00:10:19.167 | 30.00th=[ 161], 40.00th=[ 172], 50.00th=[ 186], 60.00th=[ 200], 00:10:19.167 | 70.00th=[ 217], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 265], 00:10:19.168 | 99.00th=[ 371], 99.50th=[ 412], 
99.90th=[ 474], 99.95th=[ 474], 00:10:19.168 | 99.99th=[ 474] 00:10:19.168 bw ( KiB/s): min= 2912, max= 5280, per=20.19%, avg=4096.00, stdev=1674.43, samples=2 00:10:19.168 iops : min= 728, max= 1320, avg=1024.00, stdev=418.61, samples=2 00:10:19.168 lat (usec) : 250=85.07%, 500=13.64%, 750=0.06% 00:10:19.168 lat (msec) : 2=0.06%, 4=0.06%, 50=1.10% 00:10:19.168 cpu : usr=0.48%, sys=2.79%, ctx=1547, majf=0, minf=2 00:10:19.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.168 issued rwts: total=523,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.168 job2: (groupid=0, jobs=1): err= 0: pid=304471: Fri Nov 15 10:29:07 2024 00:10:19.168 read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec) 00:10:19.168 slat (nsec): min=8980, max=31233, avg=19638.00, stdev=7743.38 00:10:19.168 clat (usec): min=40885, max=42024, avg=41521.13, stdev=486.29 00:10:19.168 lat (usec): min=40915, max=42039, avg=41540.77, stdev=485.44 00:10:19.168 clat percentiles (usec): 00:10:19.168 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:19.168 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:10:19.168 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:19.168 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:19.168 | 99.99th=[42206] 00:10:19.168 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:10:19.168 slat (nsec): min=6681, max=43473, avg=9333.62, stdev=3115.98 00:10:19.168 clat (usec): min=154, max=277, avg=230.09, stdev=25.15 00:10:19.168 lat (usec): min=161, max=284, avg=239.42, stdev=25.11 00:10:19.168 clat percentiles (usec): 00:10:19.168 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 190], 20.00th=[ 204], 00:10:19.168 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 241], 60.00th=[ 243], 00:10:19.168 | 70.00th=[ 245], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 249], 00:10:19.168 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 277], 99.95th=[ 277], 00:10:19.168 | 99.99th=[ 277] 00:10:19.168 bw ( KiB/s): min= 4096, max= 4096, per=20.19%, avg=4096.00, stdev= 0.00, samples=1 00:10:19.168 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:19.168 lat (usec) : 250=91.20%, 500=4.68% 00:10:19.168 lat (msec) : 50=4.12% 00:10:19.168 cpu : usr=0.10%, sys=0.48%, ctx=534, majf=0, minf=1 00:10:19.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.168 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.168 job3: (groupid=0, jobs=1): err= 0: pid=304472: Fri Nov 15 10:29:07 2024 00:10:19.168 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:19.168 slat (nsec): min=5159, max=44625, avg=10927.05, stdev=4791.92 00:10:19.168 clat (usec): min=194, max=1236, avg=388.59, stdev=125.52 00:10:19.168 lat (usec): min=202, max=1250, avg=399.52, stdev=127.27 00:10:19.168 clat percentiles (usec): 00:10:19.168 | 1.00th=[ 202], 5.00th=[ 221], 10.00th=[ 233], 20.00th=[ 253], 00:10:19.168 | 30.00th=[ 289], 40.00th=[ 347], 50.00th=[ 392], 
60.00th=[ 433], 00:10:19.168 | 70.00th=[ 465], 80.00th=[ 494], 90.00th=[ 545], 95.00th=[ 586], 00:10:19.168 | 99.00th=[ 644], 99.50th=[ 734], 99.90th=[ 1188], 99.95th=[ 1237], 00:10:19.168 | 99.99th=[ 1237] 00:10:19.168 write: IOPS=1736, BW=6945KiB/s (7112kB/s)(6952KiB/1001msec); 0 zone resets 00:10:19.168 slat (nsec): min=6491, max=56457, avg=8750.94, stdev=3347.64 00:10:19.168 clat (usec): min=129, max=416, avg=206.46, stdev=35.27 00:10:19.168 lat (usec): min=137, max=436, avg=215.21, stdev=35.32 00:10:19.168 clat percentiles (usec): 00:10:19.168 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 176], 00:10:19.168 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 217], 00:10:19.168 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 249], 00:10:19.168 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 412], 99.95th=[ 416], 00:10:19.168 | 99.99th=[ 416] 00:10:19.168 bw ( KiB/s): min= 8192, max= 8192, per=40.38%, avg=8192.00, stdev= 0.00, samples=1 00:10:19.168 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:19.168 lat (usec) : 250=59.07%, 500=32.47%, 750=8.28%, 1000=0.06% 00:10:19.168 lat (msec) : 2=0.12% 00:10:19.168 cpu : usr=1.40%, sys=4.10%, ctx=3277, majf=0, minf=1 00:10:19.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.168 issued rwts: total=1536,1738,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.168 00:10:19.168 Run status group 0 (all jobs): 00:10:19.168 READ: bw=13.6MiB/s (14.3MB/s), 84.8KiB/s-6138KiB/s (86.8kB/s-6285kB/s), io=14.1MiB (14.8MB), run=1001-1039msec 00:10:19.168 WRITE: bw=19.8MiB/s (20.8MB/s), 1973KiB/s-7976KiB/s (2020kB/s-8167kB/s), io=20.6MiB (21.6MB), run=1001-1039msec 00:10:19.168 00:10:19.168 Disk stats (read/write): 00:10:19.168 nvme0n1: ios=1483/1536, merge=0/0, ticks=1112/311, in_queue=1423, util=97.60% 00:10:19.168 nvme0n2: ios=531/1024, merge=0/0, ticks=635/196, in_queue=831, util=86.69% 00:10:19.168 nvme0n3: ios=17/512, merge=0/0, ticks=708/117, in_queue=825, util=88.92% 00:10:19.168 nvme0n4: ios=1321/1536, merge=0/0, ticks=1427/321, in_queue=1748, util=97.47% 00:10:19.168 10:29:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:19.168 [global] 00:10:19.168 thread=1 00:10:19.168 invalidate=1 00:10:19.168 rw=write 00:10:19.168 time_based=1 00:10:19.168 runtime=1 00:10:19.168 ioengine=libaio 00:10:19.168 direct=1 00:10:19.168 bs=4096 00:10:19.168 iodepth=128 00:10:19.168 norandommap=0 00:10:19.168 numjobs=1 00:10:19.168 00:10:19.168 verify_dump=1 00:10:19.168 verify_backlog=512 00:10:19.168 verify_state_save=0 00:10:19.168 do_verify=1 00:10:19.168 verify=crc32c-intel 00:10:19.168 [job0] 00:10:19.168 filename=/dev/nvme0n1 00:10:19.168 [job1] 00:10:19.168 filename=/dev/nvme0n2 00:10:19.168 [job2] 00:10:19.168 filename=/dev/nvme0n3 00:10:19.168 [job3] 00:10:19.168 filename=/dev/nvme0n4 00:10:19.168 Could not set queue depth (nvme0n1) 00:10:19.168 Could not set queue depth (nvme0n2) 00:10:19.168 Could not set queue depth (nvme0n3) 00:10:19.168 Could not set queue depth (nvme0n4) 00:10:19.168 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.168 job1: 
(g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.168 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.168 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.168 fio-3.35 00:10:19.168 Starting 4 threads 00:10:20.544 00:10:20.545 job0: (groupid=0, jobs=1): err= 0: pid=304696: Fri Nov 15 10:29:08 2024 00:10:20.545 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:10:20.545 slat (usec): min=2, max=14291, avg=117.14, stdev=736.26 00:10:20.545 clat (usec): min=6626, max=39195, avg=14744.22, stdev=3828.64 00:10:20.545 lat (usec): min=6636, max=39214, avg=14861.37, stdev=3903.05 00:10:20.545 clat percentiles (usec): 00:10:20.545 | 1.00th=[ 8586], 5.00th=[10945], 10.00th=[11600], 20.00th=[12125], 00:10:20.545 | 30.00th=[12387], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:10:20.545 | 70.00th=[15401], 80.00th=[17433], 90.00th=[19268], 95.00th=[22414], 00:10:20.545 | 99.00th=[29230], 99.50th=[31327], 99.90th=[33817], 99.95th=[33817], 00:10:20.545 | 99.99th=[39060] 00:10:20.545 write: IOPS=3261, BW=12.7MiB/s (13.4MB/s)(12.9MiB/1010msec); 0 zone resets 00:10:20.545 slat (usec): min=3, max=24737, avg=186.09, stdev=1353.04 00:10:20.545 clat (usec): min=4870, max=79580, avg=23430.92, stdev=12103.96 00:10:20.545 lat (usec): min=4888, max=79624, avg=23617.00, stdev=12239.90 00:10:20.545 clat percentiles (usec): 00:10:20.545 | 1.00th=[ 5800], 5.00th=[10552], 10.00th=[11863], 20.00th=[13566], 00:10:20.545 | 30.00th=[15008], 40.00th=[16188], 50.00th=[19006], 60.00th=[23462], 00:10:20.545 | 70.00th=[26608], 80.00th=[34866], 90.00th=[44827], 95.00th=[46924], 00:10:20.545 | 99.00th=[54789], 99.50th=[54789], 99.90th=[61604], 99.95th=[70779], 00:10:20.545 | 99.99th=[79168] 00:10:20.545 bw ( KiB/s): min=11944, max=13384, per=19.49%, avg=12664.00, stdev=1018.23, samples=2 00:10:20.545 iops : min= 2986, max= 3346, avg=3166.00, stdev=254.56, samples=2 00:10:20.545 lat (msec) : 10=3.69%, 20=66.78%, 50=28.51%, 100=1.02% 00:10:20.545 cpu : usr=3.47%, sys=5.75%, ctx=232, majf=0, minf=1 00:10:20.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:20.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.545 issued rwts: total=3072,3294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.545 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.545 job1: (groupid=0, jobs=1): err= 0: pid=304697: Fri Nov 15 10:29:08 2024 00:10:20.545 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:10:20.545 slat (usec): min=3, max=10431, avg=89.38, stdev=587.82 00:10:20.545 clat (usec): min=4428, max=48593, avg=11937.56, stdev=3816.63 00:10:20.545 lat (usec): min=4437, max=52164, avg=12026.94, stdev=3848.31 00:10:20.545 clat percentiles (usec): 00:10:20.545 | 1.00th=[ 6652], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10290], 00:10:20.545 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:10:20.545 | 70.00th=[11994], 80.00th=[13173], 90.00th=[15533], 95.00th=[17957], 00:10:20.545 | 99.00th=[27132], 99.50th=[39060], 99.90th=[47973], 99.95th=[48497], 00:10:20.545 | 99.99th=[48497] 00:10:20.545 write: IOPS=5148, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1003msec); 0 zone resets 00:10:20.545 slat (usec): min=4, max=8897, avg=94.90, stdev=486.94 00:10:20.545 clat 
(usec): min=692, max=57012, avg=12797.65, stdev=8393.09 00:10:20.545 lat (usec): min=2826, max=59798, avg=12892.56, stdev=8461.49 00:10:20.545 clat percentiles (usec): 00:10:20.545 | 1.00th=[ 3982], 5.00th=[ 6849], 10.00th=[ 9110], 20.00th=[10552], 00:10:20.545 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:10:20.545 | 70.00th=[11600], 80.00th=[11994], 90.00th=[13566], 95.00th=[21365], 00:10:20.545 | 99.00th=[53740], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:10:20.545 | 99.99th=[56886] 00:10:20.545 bw ( KiB/s): min=17544, max=23416, per=31.51%, avg=20480.00, stdev=4152.13, samples=2 00:10:20.545 iops : min= 4386, max= 5854, avg=5120.00, stdev=1038.03, samples=2 00:10:20.545 lat (usec) : 750=0.01% 00:10:20.545 lat (msec) : 4=0.50%, 10=13.92%, 20=81.68%, 50=2.53%, 100=1.36% 00:10:20.545 cpu : usr=6.09%, sys=11.08%, ctx=566, majf=0, minf=1 00:10:20.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:20.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.545 issued rwts: total=5120,5164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.545 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.545 job2: (groupid=0, jobs=1): err= 0: pid=304699: Fri Nov 15 10:29:08 2024 00:10:20.545 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:10:20.545 slat (usec): min=2, max=12114, avg=107.77, stdev=597.91 00:10:20.545 clat (usec): min=1905, max=77856, avg=15229.92, stdev=9254.98 00:10:20.545 lat (usec): min=1918, max=77865, avg=15337.69, stdev=9284.10 00:10:20.545 clat percentiles (usec): 00:10:20.545 | 1.00th=[ 3720], 5.00th=[ 8848], 10.00th=[11338], 20.00th=[12125], 00:10:20.545 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13173], 60.00th=[13566], 00:10:20.545 | 70.00th=[14091], 80.00th=[14746], 90.00th=[23200], 95.00th=[23725], 00:10:20.545 | 99.00th=[67634], 99.50th=[76022], 99.90th=[78119], 99.95th=[78119], 00:10:20.545 | 99.99th=[78119] 00:10:20.545 write: IOPS=4387, BW=17.1MiB/s (18.0MB/s)(17.2MiB/1005msec); 0 zone resets 00:10:20.545 slat (usec): min=3, max=24309, avg=113.38, stdev=799.75 00:10:20.545 clat (usec): min=1263, max=57703, avg=14747.69, stdev=8108.49 00:10:20.545 lat (usec): min=1827, max=68112, avg=14861.08, stdev=8176.02 00:10:20.545 clat percentiles (usec): 00:10:20.545 | 1.00th=[ 4817], 5.00th=[ 9503], 10.00th=[10945], 20.00th=[11731], 00:10:20.545 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:10:20.545 | 70.00th=[13566], 80.00th=[14615], 90.00th=[17695], 95.00th=[29754], 00:10:20.545 | 99.00th=[56886], 99.50th=[57410], 99.90th=[57410], 99.95th=[57934], 00:10:20.545 | 99.99th=[57934] 00:10:20.545 bw ( KiB/s): min=13768, max=20480, per=26.35%, avg=17124.00, stdev=4746.10, samples=2 00:10:20.545 iops : min= 3442, max= 5120, avg=4281.00, stdev=1186.53, samples=2 00:10:20.545 lat (msec) : 2=0.12%, 4=0.85%, 10=5.08%, 20=83.03%, 50=9.31% 00:10:20.545 lat (msec) : 100=1.61% 00:10:20.545 cpu : usr=5.38%, sys=9.16%, ctx=427, majf=0, minf=1 00:10:20.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:20.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.545 issued rwts: total=4096,4409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.545 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.545 job3: 
(groupid=0, jobs=1): err= 0: pid=304700: Fri Nov 15 10:29:08 2024 00:10:20.545 read: IOPS=3658, BW=14.3MiB/s (15.0MB/s)(14.9MiB/1044msec) 00:10:20.545 slat (usec): min=3, max=14870, avg=132.74, stdev=901.25 00:10:20.545 clat (usec): min=4724, max=57500, avg=16885.08, stdev=8317.42 00:10:20.545 lat (usec): min=4730, max=57507, avg=17017.82, stdev=8361.66 00:10:20.545 clat percentiles (usec): 00:10:20.545 | 1.00th=[ 6980], 5.00th=[11338], 10.00th=[11863], 20.00th=[12125], 00:10:20.545 | 30.00th=[12518], 40.00th=[14615], 50.00th=[15401], 60.00th=[15664], 00:10:20.545 | 70.00th=[15926], 80.00th=[17957], 90.00th=[22938], 95.00th=[36439], 00:10:20.545 | 99.00th=[52167], 99.50th=[52167], 99.90th=[57410], 99.95th=[57410], 00:10:20.545 | 99.99th=[57410] 00:10:20.545 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:10:20.545 slat (usec): min=4, max=12817, avg=110.61, stdev=584.79 00:10:20.545 clat (usec): min=713, max=51904, avg=16562.33, stdev=8248.86 00:10:20.545 lat (usec): min=724, max=51913, avg=16672.95, stdev=8301.87 00:10:20.545 clat percentiles (usec): 00:10:20.545 | 1.00th=[ 3326], 5.00th=[ 7242], 10.00th=[ 9765], 20.00th=[12387], 00:10:20.545 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13829], 60.00th=[14615], 00:10:20.545 | 70.00th=[15270], 80.00th=[19530], 90.00th=[31065], 95.00th=[34866], 00:10:20.545 | 99.00th=[41157], 99.50th=[44827], 99.90th=[49546], 99.95th=[49546], 00:10:20.545 | 99.99th=[51643] 00:10:20.545 bw ( KiB/s): min=12976, max=19792, per=25.21%, avg=16384.00, stdev=4819.64, samples=2 00:10:20.545 iops : min= 3244, max= 4948, avg=4096.00, stdev=1204.91, samples=2 00:10:20.545 lat (usec) : 750=0.04% 00:10:20.545 lat (msec) : 2=0.04%, 4=0.87%, 10=6.04%, 20=75.59%, 50=16.54% 00:10:20.545 lat (msec) : 100=0.88% 00:10:20.545 cpu : usr=3.26%, sys=5.27%, ctx=478, majf=0, minf=1 00:10:20.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:20.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.545 issued rwts: total=3819,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.545 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.545 00:10:20.545 Run status group 0 (all jobs): 00:10:20.545 READ: bw=60.3MiB/s (63.2MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=62.9MiB (66.0MB), run=1003-1044msec 00:10:20.545 WRITE: bw=63.5MiB/s (66.6MB/s), 12.7MiB/s-20.1MiB/s (13.4MB/s-21.1MB/s), io=66.3MiB (69.5MB), run=1003-1044msec 00:10:20.545 00:10:20.545 Disk stats (read/write): 00:10:20.545 nvme0n1: ios=2583/2791, merge=0/0, ticks=20121/25907, in_queue=46028, util=96.99% 00:10:20.545 nvme0n2: ios=4140/4344, merge=0/0, ticks=41554/41891, in_queue=83445, util=96.83% 00:10:20.545 nvme0n3: ios=3821/4096, merge=0/0, ticks=21744/25535, in_queue=47279, util=96.74% 00:10:20.545 nvme0n4: ios=3123/3207, merge=0/0, ticks=49794/55527, in_queue=105321, util=96.83% 00:10:20.545 10:29:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:20.545 [global] 00:10:20.545 thread=1 00:10:20.545 invalidate=1 00:10:20.545 rw=randwrite 00:10:20.545 time_based=1 00:10:20.545 runtime=1 00:10:20.545 ioengine=libaio 00:10:20.545 direct=1 00:10:20.545 bs=4096 00:10:20.545 iodepth=128 00:10:20.545 norandommap=0 00:10:20.545 numjobs=1 00:10:20.545 00:10:20.545 verify_dump=1 00:10:20.545 
verify_backlog=512 00:10:20.545 verify_state_save=0 00:10:20.545 do_verify=1 00:10:20.545 verify=crc32c-intel 00:10:20.545 [job0] 00:10:20.545 filename=/dev/nvme0n1 00:10:20.545 [job1] 00:10:20.545 filename=/dev/nvme0n2 00:10:20.545 [job2] 00:10:20.545 filename=/dev/nvme0n3 00:10:20.545 [job3] 00:10:20.545 filename=/dev/nvme0n4 00:10:20.545 Could not set queue depth (nvme0n1) 00:10:20.545 Could not set queue depth (nvme0n2) 00:10:20.545 Could not set queue depth (nvme0n3) 00:10:20.546 Could not set queue depth (nvme0n4) 00:10:20.804 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.804 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.804 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.804 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.804 fio-3.35 00:10:20.804 Starting 4 threads 00:10:22.180 00:10:22.180 job0: (groupid=0, jobs=1): err= 0: pid=305050: Fri Nov 15 10:29:10 2024 00:10:22.180 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:10:22.180 slat (usec): min=2, max=20200, avg=124.46, stdev=780.72 00:10:22.180 clat (usec): min=6683, max=57328, avg=16084.74, stdev=9182.31 00:10:22.180 lat (usec): min=7702, max=57335, avg=16209.20, stdev=9241.68 00:10:22.180 clat percentiles (usec): 00:10:22.180 | 1.00th=[ 9241], 5.00th=[10421], 10.00th=[10683], 20.00th=[11731], 00:10:22.180 | 30.00th=[11994], 40.00th=[12256], 50.00th=[13042], 60.00th=[13698], 00:10:22.180 | 70.00th=[14353], 80.00th=[16057], 90.00th=[22414], 95.00th=[40633], 00:10:22.180 | 99.00th=[53216], 99.50th=[54789], 99.90th=[57410], 99.95th=[57410], 00:10:22.180 | 99.99th=[57410] 00:10:22.180 write: IOPS=3943, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1004msec); 0 zone resets 00:10:22.180 slat (usec): min=4, max=10943, avg=127.11, stdev=684.28 00:10:22.180 clat (usec): min=300, max=82082, avg=17613.58, stdev=13340.20 00:10:22.180 lat (usec): min=341, max=82100, avg=17740.69, stdev=13421.40 00:10:22.180 clat percentiles (usec): 00:10:22.180 | 1.00th=[ 2180], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10552], 00:10:22.180 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12256], 60.00th=[13960], 00:10:22.180 | 70.00th=[16909], 80.00th=[22676], 90.00th=[29230], 95.00th=[49021], 00:10:22.180 | 99.00th=[76022], 99.50th=[80217], 99.90th=[82314], 99.95th=[82314], 00:10:22.180 | 99.99th=[82314] 00:10:22.180 bw ( KiB/s): min=13328, max=17320, per=24.24%, avg=15324.00, stdev=2822.77, samples=2 00:10:22.180 iops : min= 3332, max= 4330, avg=3831.00, stdev=705.69, samples=2 00:10:22.180 lat (usec) : 500=0.01%, 1000=0.09% 00:10:22.180 lat (msec) : 2=0.27%, 4=0.48%, 10=7.61%, 20=72.16%, 50=15.64% 00:10:22.180 lat (msec) : 100=3.74% 00:10:22.180 cpu : usr=4.69%, sys=8.37%, ctx=392, majf=0, minf=1 00:10:22.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:22.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.180 issued rwts: total=3584,3959,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.180 job1: (groupid=0, jobs=1): err= 0: pid=305051: Fri Nov 15 10:29:10 2024 00:10:22.180 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:10:22.180 slat 
(usec): min=2, max=9836, avg=81.72, stdev=473.38 00:10:22.180 clat (usec): min=4317, max=25147, avg=12337.43, stdev=2301.99 00:10:22.180 lat (usec): min=4321, max=25235, avg=12419.14, stdev=2306.65 00:10:22.180 clat percentiles (usec): 00:10:22.180 | 1.00th=[ 8586], 5.00th=[10159], 10.00th=[10421], 20.00th=[10945], 00:10:22.180 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12387], 00:10:22.180 | 70.00th=[12649], 80.00th=[12911], 90.00th=[14091], 95.00th=[16581], 00:10:22.180 | 99.00th=[24511], 99.50th=[24511], 99.90th=[25035], 99.95th=[25035], 00:10:22.180 | 99.99th=[25035] 00:10:22.180 write: IOPS=5221, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1004msec); 0 zone resets 00:10:22.180 slat (usec): min=3, max=10817, avg=91.60, stdev=544.82 00:10:22.180 clat (usec): min=938, max=41388, avg=12259.63, stdev=3865.87 00:10:22.180 lat (usec): min=5263, max=41394, avg=12351.23, stdev=3891.34 00:10:22.180 clat percentiles (usec): 00:10:22.180 | 1.00th=[ 6063], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[ 9765], 00:10:22.180 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:10:22.180 | 70.00th=[12518], 80.00th=[13304], 90.00th=[15926], 95.00th=[20579], 00:10:22.180 | 99.00th=[26608], 99.50th=[28181], 99.90th=[39060], 99.95th=[39060], 00:10:22.180 | 99.99th=[41157] 00:10:22.180 bw ( KiB/s): min=20360, max=20664, per=32.45%, avg=20512.00, stdev=214.96, samples=2 00:10:22.180 iops : min= 5090, max= 5166, avg=5128.00, stdev=53.74, samples=2 00:10:22.180 lat (usec) : 1000=0.01% 00:10:22.180 lat (msec) : 10=14.24%, 20=82.15%, 50=3.60% 00:10:22.180 cpu : usr=5.68%, sys=9.97%, ctx=454, majf=0, minf=1 00:10:22.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:22.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.180 issued rwts: total=5120,5242,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.180 job2: (groupid=0, jobs=1): err= 0: pid=305052: Fri Nov 15 10:29:10 2024 00:10:22.180 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:10:22.180 slat (usec): min=2, max=14901, avg=117.11, stdev=817.41 00:10:22.180 clat (usec): min=1280, max=38160, avg=15421.12, stdev=5037.25 00:10:22.180 lat (usec): min=1284, max=40034, avg=15538.22, stdev=5093.29 00:10:22.180 clat percentiles (usec): 00:10:22.180 | 1.00th=[ 5276], 5.00th=[ 8029], 10.00th=[11076], 20.00th=[12387], 00:10:22.180 | 30.00th=[13042], 40.00th=[13829], 50.00th=[14484], 60.00th=[15270], 00:10:22.180 | 70.00th=[15795], 80.00th=[17171], 90.00th=[22676], 95.00th=[27132], 00:10:22.180 | 99.00th=[31589], 99.50th=[33162], 99.90th=[37487], 99.95th=[38011], 00:10:22.180 | 99.99th=[38011] 00:10:22.180 write: IOPS=4216, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1006msec); 0 zone resets 00:10:22.180 slat (usec): min=3, max=11872, avg=105.31, stdev=595.03 00:10:22.180 clat (usec): min=389, max=79974, avg=15104.15, stdev=8372.80 00:10:22.180 lat (usec): min=743, max=79979, avg=15209.47, stdev=8380.68 00:10:22.180 clat percentiles (usec): 00:10:22.180 | 1.00th=[ 3458], 5.00th=[ 5997], 10.00th=[ 7504], 20.00th=[10814], 00:10:22.180 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13304], 60.00th=[14877], 00:10:22.180 | 70.00th=[15664], 80.00th=[17957], 90.00th=[22938], 95.00th=[30016], 00:10:22.180 | 99.00th=[62129], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:10:22.180 | 99.99th=[80217] 00:10:22.180 bw ( KiB/s): 
min=14352, max=18560, per=26.03%, avg=16456.00, stdev=2975.51, samples=2 00:10:22.180 iops : min= 3588, max= 4640, avg=4114.00, stdev=743.88, samples=2 00:10:22.180 lat (usec) : 500=0.01%, 750=0.02% 00:10:22.180 lat (msec) : 2=0.12%, 4=0.76%, 10=12.16%, 20=73.46%, 50=12.92% 00:10:22.180 lat (msec) : 100=0.55% 00:10:22.180 cpu : usr=5.07%, sys=6.07%, ctx=424, majf=0, minf=1 00:10:22.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:22.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.180 issued rwts: total=4096,4242,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.180 job3: (groupid=0, jobs=1): err= 0: pid=305053: Fri Nov 15 10:29:10 2024 00:10:22.180 read: IOPS=2751, BW=10.7MiB/s (11.3MB/s)(11.2MiB/1045msec) 00:10:22.180 slat (usec): min=2, max=11561, avg=165.57, stdev=836.78 00:10:22.181 clat (usec): min=7246, max=70103, avg=22731.16, stdev=13506.43 00:10:22.181 lat (usec): min=7776, max=70116, avg=22896.73, stdev=13555.32 00:10:22.181 clat percentiles (usec): 00:10:22.181 | 1.00th=[10421], 5.00th=[11076], 10.00th=[11731], 20.00th=[13960], 00:10:22.181 | 30.00th=[14484], 40.00th=[15008], 50.00th=[15926], 60.00th=[19268], 00:10:22.181 | 70.00th=[23987], 80.00th=[32900], 90.00th=[44303], 95.00th=[53740], 00:10:22.181 | 99.00th=[63701], 99.50th=[63701], 99.90th=[69731], 99.95th=[69731], 00:10:22.181 | 99.99th=[69731] 00:10:22.181 write: IOPS=2939, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1045msec); 0 zone resets 00:10:22.181 slat (usec): min=3, max=22137, avg=161.45, stdev=1036.05 00:10:22.181 clat (usec): min=3419, max=71172, avg=21871.47, stdev=12494.18 00:10:22.181 lat (usec): min=3443, max=71185, avg=22032.92, stdev=12542.22 00:10:22.181 clat percentiles (usec): 00:10:22.181 | 1.00th=[ 6456], 5.00th=[10159], 10.00th=[11600], 20.00th=[13829], 00:10:22.181 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15401], 60.00th=[18482], 00:10:22.181 | 70.00th=[24249], 80.00th=[31851], 90.00th=[43254], 95.00th=[49021], 00:10:22.181 | 99.00th=[57934], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:10:22.181 | 99.99th=[70779] 00:10:22.181 bw ( KiB/s): min= 8192, max=16384, per=19.44%, avg=12288.00, stdev=5792.62, samples=2 00:10:22.181 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:10:22.181 lat (msec) : 4=0.18%, 10=1.95%, 20=60.79%, 50=30.76%, 100=6.32% 00:10:22.181 cpu : usr=3.83%, sys=4.50%, ctx=353, majf=0, minf=1 00:10:22.181 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:22.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.181 issued rwts: total=2875,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.181 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.181 00:10:22.181 Run status group 0 (all jobs): 00:10:22.181 READ: bw=58.6MiB/s (61.4MB/s), 10.7MiB/s-19.9MiB/s (11.3MB/s-20.9MB/s), io=61.2MiB (64.2MB), run=1004-1045msec 00:10:22.181 WRITE: bw=61.7MiB/s (64.7MB/s), 11.5MiB/s-20.4MiB/s (12.0MB/s-21.4MB/s), io=64.5MiB (67.6MB), run=1004-1045msec 00:10:22.181 00:10:22.181 Disk stats (read/write): 00:10:22.181 nvme0n1: ios=3315/3584, merge=0/0, ticks=25330/41832, in_queue=67162, util=98.50% 00:10:22.181 nvme0n2: ios=4200/4608, merge=0/0, ticks=25813/29862, in_queue=55675, util=96.15% 00:10:22.181 
nvme0n3: ios=3634/3791, merge=0/0, ticks=37969/33702, in_queue=71671, util=96.88% 00:10:22.181 nvme0n4: ios=2162/2560, merge=0/0, ticks=13369/17647, in_queue=31016, util=96.65% 00:10:22.181 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:22.181 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=305191 00:10:22.181 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:22.181 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:22.181 [global] 00:10:22.181 thread=1 00:10:22.181 invalidate=1 00:10:22.181 rw=read 00:10:22.181 time_based=1 00:10:22.181 runtime=10 00:10:22.181 ioengine=libaio 00:10:22.181 direct=1 00:10:22.181 bs=4096 00:10:22.181 iodepth=1 00:10:22.181 norandommap=1 00:10:22.181 numjobs=1 00:10:22.181 00:10:22.181 [job0] 00:10:22.181 filename=/dev/nvme0n1 00:10:22.181 [job1] 00:10:22.181 filename=/dev/nvme0n2 00:10:22.181 [job2] 00:10:22.181 filename=/dev/nvme0n3 00:10:22.181 [job3] 00:10:22.181 filename=/dev/nvme0n4 00:10:22.181 Could not set queue depth (nvme0n1) 00:10:22.181 Could not set queue depth (nvme0n2) 00:10:22.181 Could not set queue depth (nvme0n3) 00:10:22.181 Could not set queue depth (nvme0n4) 00:10:22.181 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.181 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.181 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.181 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.181 fio-3.35 00:10:22.181 Starting 4 threads 00:10:25.468 10:29:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:25.468 10:29:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:25.468 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=32034816, buflen=4096 00:10:25.468 fio: pid=305285, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:25.727 10:29:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.727 10:29:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:25.727 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=28971008, buflen=4096 00:10:25.727 fio: pid=305284, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:25.986 10:29:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.986 10:29:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:25.986 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=31490048, buflen=4096 00:10:25.986 fio: pid=305282, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:10:26.246 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1437696, buflen=4096 00:10:26.246 fio: pid=305283, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:26.246 10:29:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.246 10:29:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:26.246 00:10:26.246 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=305282: Fri Nov 15 10:29:14 2024 00:10:26.246 read: IOPS=2184, BW=8736KiB/s (8946kB/s)(30.0MiB/3520msec) 00:10:26.246 slat (usec): min=6, max=18914, avg=16.53, stdev=268.87 00:10:26.246 clat (usec): min=161, max=42229, avg=435.19, stdev=2674.02 00:10:26.246 lat (usec): min=168, max=49033, avg=451.71, stdev=2703.20 00:10:26.246 clat percentiles (usec): 00:10:26.246 | 1.00th=[ 174], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 208], 00:10:26.246 | 30.00th=[ 219], 40.00th=[ 233], 50.00th=[ 251], 60.00th=[ 269], 00:10:26.246 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 363], 00:10:26.246 | 99.00th=[ 506], 99.50th=[ 627], 99.90th=[41157], 99.95th=[42206], 00:10:26.246 | 99.99th=[42206] 00:10:26.246 bw ( KiB/s): min= 3329, max=16584, per=39.83%, avg=9676.17, stdev=4334.01, samples=6 00:10:26.246 iops : min= 832, max= 4146, avg=2419.00, stdev=1083.58, samples=6 00:10:26.246 lat (usec) : 250=49.38%, 500=49.54%, 750=0.62% 00:10:26.246 lat (msec) : 20=0.01%, 50=0.43% 00:10:26.246 cpu : usr=1.34%, sys=3.98%, ctx=7696, majf=0, minf=1 00:10:26.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.246 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.246 issued rwts: total=7689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.246 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=305283: Fri Nov 15 10:29:14 2024 00:10:26.246 read: IOPS=93, BW=372KiB/s (381kB/s)(1404KiB/3776msec) 00:10:26.246 slat (usec): min=6, max=13860, avg=100.75, stdev=994.08 00:10:26.246 clat (usec): min=179, max=43955, avg=10586.66, stdev=17781.95 00:10:26.246 lat (usec): min=186, max=54957, avg=10665.23, stdev=17924.45 00:10:26.246 clat percentiles (usec): 00:10:26.246 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 212], 00:10:26.246 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 251], 00:10:26.246 | 70.00th=[ 322], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:26.246 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:10:26.246 | 99.99th=[43779] 00:10:26.246 bw ( KiB/s): min= 96, max= 1550, per=1.51%, avg=368.71, stdev=539.87, samples=7 00:10:26.246 iops : min= 24, max= 387, avg=92.00, stdev=134.84, samples=7 00:10:26.246 lat (usec) : 250=59.66%, 500=14.20%, 750=0.57% 00:10:26.246 lat (msec) : 50=25.28% 00:10:26.246 cpu : usr=0.05%, sys=0.16%, ctx=356, majf=0, minf=2 00:10:26.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.246 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.246 issued rwts: total=352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.246 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=305284: Fri Nov 15 10:29:14 2024 00:10:26.246 read: IOPS=2181, BW=8727KiB/s (8936kB/s)(27.6MiB/3242msec) 00:10:26.246 slat (usec): min=5, max=8197, avg=15.04, stdev=125.94 00:10:26.246 clat (usec): min=176, max=42230, avg=436.30, stdev=2712.70 00:10:26.246 lat (usec): min=182, max=42236, avg=451.34, stdev=2716.04 00:10:26.246 clat percentiles (usec): 00:10:26.246 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 210], 00:10:26.246 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 249], 00:10:26.246 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 343], 95.00th=[ 375], 00:10:26.246 | 99.00th=[ 494], 99.50th=[ 652], 99.90th=[42206], 99.95th=[42206], 00:10:26.246 | 99.99th=[42206] 00:10:26.246 bw ( KiB/s): min= 3704, max=16472, per=35.59%, avg=8647.83, stdev=5513.78, samples=6 00:10:26.246 iops : min= 926, max= 4118, avg=2161.83, stdev=1378.45, samples=6 00:10:26.246 lat (usec) : 250=60.08%, 500=38.96%, 750=0.49% 00:10:26.246 lat (msec) : 4=0.01%, 50=0.44% 00:10:26.246 cpu : usr=1.33%, sys=3.05%, ctx=7077, majf=0, minf=2 00:10:26.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.246 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.246 issued rwts: total=7074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.246 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=305285: Fri Nov 15 10:29:14 2024 00:10:26.246 read: IOPS=2682, BW=10.5MiB/s (11.0MB/s)(30.6MiB/2916msec) 00:10:26.246 slat (nsec): min=4612, max=68339, avg=14408.02, stdev=9240.24 00:10:26.246 clat (usec): min=191, max=41977, avg=351.33, stdev=1551.60 00:10:26.246 lat (usec): min=197, max=41987, avg=365.74, stdev=1551.92 00:10:26.246 clat percentiles (usec): 00:10:26.247 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 235], 00:10:26.247 | 30.00th=[ 251], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 302], 00:10:26.247 | 70.00th=[ 314], 80.00th=[ 334], 90.00th=[ 371], 95.00th=[ 400], 00:10:26.247 | 99.00th=[ 529], 99.50th=[ 586], 99.90th=[41157], 99.95th=[42206], 00:10:26.247 | 99.99th=[42206] 00:10:26.247 bw ( KiB/s): min= 632, max=14480, per=42.56%, avg=10340.80, stdev=5622.49, samples=5 00:10:26.247 iops : min= 158, max= 3620, avg=2585.20, stdev=1405.62, samples=5 00:10:26.247 lat (usec) : 250=30.01%, 500=68.55%, 750=1.25%, 1000=0.01% 00:10:26.247 lat (msec) : 4=0.01%, 20=0.01%, 50=0.14% 00:10:26.247 cpu : usr=1.34%, sys=4.84%, ctx=7822, majf=0, minf=1 00:10:26.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.247 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.247 issued rwts: total=7822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.247 00:10:26.247 Run status group 0 (all jobs): 00:10:26.247 READ: bw=23.7MiB/s (24.9MB/s), 372KiB/s-10.5MiB/s (381kB/s-11.0MB/s), io=89.6MiB (93.9MB), run=2916-3776msec 00:10:26.247 00:10:26.247 Disk stats 
(read/write): 00:10:26.247 nvme0n1: ios=7725/0, merge=0/0, ticks=3327/0, in_queue=3327, util=98.74% 00:10:26.247 nvme0n2: ios=346/0, merge=0/0, ticks=3513/0, in_queue=3513, util=95.22% 00:10:26.247 nvme0n3: ios=6679/0, merge=0/0, ticks=3012/0, in_queue=3012, util=99.31% 00:10:26.247 nvme0n4: ios=7519/0, merge=0/0, ticks=2601/0, in_queue=2601, util=96.74% 00:10:26.505 10:29:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.505 10:29:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:26.765 10:29:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.765 10:29:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:27.023 10:29:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:27.023 10:29:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:27.282 10:29:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:27.282 10:29:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:27.540 10:29:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:27.540 10:29:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 305191 00:10:27.540 10:29:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:27.540 10:29:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.799 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.799 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:27.799 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:27.799 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.799 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:27.799 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.799 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:27.799 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:27.799 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:27.799 nvmf hotplug test: fio failed as expected 00:10:27.799 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:28.058 rmmod nvme_tcp 00:10:28.058 rmmod nvme_fabrics 00:10:28.058 rmmod nvme_keyring 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 303157 ']' 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 303157 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 303157 ']' 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 303157 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 303157 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 303157' 00:10:28.058 killing process with pid 303157 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 303157 00:10:28.058 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 303157 00:10:28.316 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:28.316 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:28.316 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 
00:10:28.316 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:28.316 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:28.316 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:28.316 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:28.316 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:28.316 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:28.316 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.316 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.316 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:30.860 00:10:30.860 real 0m24.274s 00:10:30.860 user 1m25.714s 00:10:30.860 sys 0m7.139s 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.860 ************************************ 00:10:30.860 END TEST nvmf_fio_target 00:10:30.860 ************************************ 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.860 ************************************ 00:10:30.860 START TEST nvmf_bdevio 00:10:30.860 ************************************ 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:30.860 * Looking for test storage... 
00:10:30.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:30.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.860 --rc genhtml_branch_coverage=1 00:10:30.860 --rc genhtml_function_coverage=1 00:10:30.860 --rc genhtml_legend=1 00:10:30.860 --rc geninfo_all_blocks=1 00:10:30.860 --rc geninfo_unexecuted_blocks=1 00:10:30.860 00:10:30.860 ' 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:30.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.860 --rc genhtml_branch_coverage=1 00:10:30.860 --rc genhtml_function_coverage=1 00:10:30.860 --rc genhtml_legend=1 00:10:30.860 --rc geninfo_all_blocks=1 00:10:30.860 --rc geninfo_unexecuted_blocks=1 00:10:30.860 00:10:30.860 ' 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:30.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.860 --rc genhtml_branch_coverage=1 00:10:30.860 --rc genhtml_function_coverage=1 00:10:30.860 --rc genhtml_legend=1 00:10:30.860 --rc geninfo_all_blocks=1 00:10:30.860 --rc geninfo_unexecuted_blocks=1 00:10:30.860 00:10:30.860 ' 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:30.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.860 --rc genhtml_branch_coverage=1 00:10:30.860 --rc genhtml_function_coverage=1 00:10:30.860 --rc genhtml_legend=1 00:10:30.860 --rc geninfo_all_blocks=1 00:10:30.860 --rc geninfo_unexecuted_blocks=1 00:10:30.860 00:10:30.860 ' 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.860 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.861 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.766 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.766 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:32.766 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:32.766 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:32.766 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:32.766 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:32.766 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:32.766 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:32.766 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:10:32.767 Found 0000:82:00.0 (0x8086 - 0x159b) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:10:32.767 Found 0000:82:00.1 (0x8086 - 0x159b) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:32.767 10:29:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:10:32.767 Found net devices under 0000:82:00.0: cvl_0_0 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:10:32.767 Found net devices under 0000:82:00.1: cvl_0_1 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.767 
10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:32.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:10:32.767 00:10:32.767 --- 10.0.0.2 ping statistics --- 00:10:32.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.767 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:32.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:10:32.767 00:10:32.767 --- 10.0.0.1 ping statistics --- 00:10:32.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.767 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=307931 00:10:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 307931 00:10:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 307931 ']' 00:10:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.026 [2024-11-15 10:29:21.307898] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
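The nvmf_tcp_init trace above prepares the two E810 ports for over-the-wire testing: the target port (cvl_0_0) is moved into a private network namespace while its link partner (cvl_0_1) stays in the root namespace as the initiator side. A minimal sketch of that sequence, reconstructed from the commands shown in the trace (interface names and addresses exactly as they appear above):

```bash
# Flush stale addresses, then isolate the target-side port in its own namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator (root namespace) gets 10.0.0.1; target (namespace) gets 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic (port 4420) on the initiator-facing interface, then check both directions.
# (The test additionally tags this rule with an SPDK_NVMF comment so teardown can strip it later.)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

With both pings returning one reply each, nvmfappstart launches nvmf_tgt inside the namespace, which is what the waitforlisten message above is polling for.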
00:10:33.026 [2024-11-15 10:29:21.307972] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.026 [2024-11-15 10:29:21.382038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.026 [2024-11-15 10:29:21.442314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.026 [2024-11-15 10:29:21.442388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.026 [2024-11-15 10:29:21.442404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.026 [2024-11-15 10:29:21.442431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.026 [2024-11-15 10:29:21.442441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.026 [2024-11-15 10:29:21.447383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:33.026 [2024-11-15 10:29:21.447430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:33.026 [2024-11-15 10:29:21.447521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:33.026 [2024-11-15 10:29:21.447525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.285 [2024-11-15 10:29:21.596932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.285 Malloc0 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.285 10:29:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.285 [2024-11-15 10:29:21.667732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:33.285 { 00:10:33.285 "params": { 00:10:33.285 "name": "Nvme$subsystem", 00:10:33.285 "trtype": "$TEST_TRANSPORT", 00:10:33.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.285 "adrfam": "ipv4", 00:10:33.285 "trsvcid": "$NVMF_PORT", 00:10:33.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.285 "hdgst": ${hdgst:-false}, 00:10:33.285 "ddgst": ${ddgst:-false} 00:10:33.285 }, 00:10:33.285 "method": "bdev_nvme_attach_controller" 00:10:33.285 } 00:10:33.285 EOF 00:10:33.285 )") 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:33.285 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:33.285 "params": { 00:10:33.285 "name": "Nvme1", 00:10:33.285 "trtype": "tcp", 00:10:33.285 "traddr": "10.0.0.2", 00:10:33.285 "adrfam": "ipv4", 00:10:33.285 "trsvcid": "4420", 00:10:33.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.285 "hdgst": false, 00:10:33.285 "ddgst": false 00:10:33.285 }, 00:10:33.285 "method": "bdev_nvme_attach_controller" 00:10:33.285 }' 00:10:33.285 [2024-11-15 10:29:21.718738] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
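At this point the target has been provisioned entirely over JSON-RPC: a TCP transport, a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. A sketch of the equivalent manual provisioning, assuming rpc_cmd resolves to scripts/rpc.py against the /var/tmp/spdk.sock socket named in the waitforlisten message earlier (that expansion is an assumption, not shown in the trace):

```bash
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed expansion of the test's rpc_cmd helper

$RPC nvmf_create_transport -t tcp -o -u 8192                                   # transport options exactly as the test passes them
$RPC bdev_malloc_create 64 512 -b Malloc0                                      # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The bdevio process then starts on the host side with the JSON printed above, which asks it to run bdev_nvme_attach_controller against 10.0.0.2:4420 with hdgst/ddgst disabled, exposing the remote namespace as bdev Nvme1n1.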
00:10:33.285 [2024-11-15 10:29:21.718802] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308075 ] 00:10:33.544 [2024-11-15 10:29:21.787773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:33.544 [2024-11-15 10:29:21.850832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.544 [2024-11-15 10:29:21.850882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.544 [2024-11-15 10:29:21.850887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.803 I/O targets: 00:10:33.803 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:33.803 00:10:33.803 00:10:33.803 CUnit - A unit testing framework for C - Version 2.1-3 00:10:33.803 http://cunit.sourceforge.net/ 00:10:33.803 00:10:33.803 00:10:33.803 Suite: bdevio tests on: Nvme1n1 00:10:33.803 Test: blockdev write read block ...passed 00:10:33.803 Test: blockdev write zeroes read block ...passed 00:10:33.803 Test: blockdev write zeroes read no split ...passed 00:10:34.061 Test: blockdev write zeroes read split ...passed 00:10:34.061 Test: blockdev write zeroes read split partial ...passed 00:10:34.061 Test: blockdev reset ...[2024-11-15 10:29:22.317785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:34.061 [2024-11-15 10:29:22.317903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edb640 (9): Bad file descriptor 00:10:34.061 [2024-11-15 10:29:22.329839] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:34.061 passed 00:10:34.061 Test: blockdev write read 8 blocks ...passed 00:10:34.061 Test: blockdev write read size > 128k ...passed 00:10:34.061 Test: blockdev write read invalid size ...passed 00:10:34.061 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:34.061 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:34.061 Test: blockdev write read max offset ...passed 00:10:34.061 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:34.319 Test: blockdev writev readv 8 blocks ...passed 00:10:34.319 Test: blockdev writev readv 30 x 1block ...passed 00:10:34.319 Test: blockdev writev readv block ...passed 00:10:34.319 Test: blockdev writev readv size > 128k ...passed 00:10:34.319 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:34.319 Test: blockdev comparev and writev ...[2024-11-15 10:29:22.663835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.319 [2024-11-15 10:29:22.663876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:34.319 [2024-11-15 10:29:22.663902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.319 [2024-11-15 10:29:22.663920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:34.319 [2024-11-15 10:29:22.664388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.319 [2024-11-15 10:29:22.664413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:34.319 [2024-11-15 10:29:22.664436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.319 [2024-11-15 10:29:22.664453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:34.319 [2024-11-15 10:29:22.664888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.319 [2024-11-15 10:29:22.664912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:34.319 [2024-11-15 10:29:22.664934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.319 [2024-11-15 10:29:22.664951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:34.319 [2024-11-15 10:29:22.665418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.319 [2024-11-15 10:29:22.665442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:34.319 [2024-11-15 10:29:22.665464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.319 [2024-11-15 10:29:22.665481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:34.319 passed 00:10:34.319 Test: blockdev nvme passthru rw ...passed 00:10:34.319 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:29:22.747673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:34.319 [2024-11-15 10:29:22.747702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:34.319 [2024-11-15 10:29:22.747845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:34.319 [2024-11-15 10:29:22.747868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:34.319 [2024-11-15 10:29:22.748004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:34.319 [2024-11-15 10:29:22.748026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:34.319 [2024-11-15 10:29:22.748162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:34.319 [2024-11-15 10:29:22.748191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:34.319 passed 00:10:34.319 Test: blockdev nvme admin passthru ...passed 00:10:34.578 Test: blockdev copy ...passed 00:10:34.578 00:10:34.578 Run Summary: Type Total Ran Passed Failed Inactive 00:10:34.578 suites 1 1 n/a 0 0 00:10:34.578 tests 23 23 23 0 0 00:10:34.578 asserts 152 152 152 0 n/a 00:10:34.578 00:10:34.578 Elapsed time = 1.282 seconds 00:10:34.578 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.578 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.578 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:34.578 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.578 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:34.578 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:34.578 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:34.578 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:34.578 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.578 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:34.578 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.578 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.578 rmmod nvme_tcp 00:10:34.578 rmmod nvme_fabrics 00:10:34.578 rmmod nvme_keyring 00:10:34.578 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.578 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:34.578 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
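The CUnit summary above reports a clean run: 23 bdevio tests and 152 asserts against Nvme1n1 (the 131072-block, 512-byte namespace, i.e. the 64 MiB Malloc0) with zero failures. The COMPARE FAILURE and ABORTED - FAILED FUSED completions logged during the comparev/writev cases appear to be the deliberately miscompared fused compare-and-write paths rather than real errors, since those tests are marked passed. Teardown then mirrors the setup; a sketch of the cleanup commands visible in the trace (same assumed rpc.py wrapper as in the sketch above):

```bash
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
sync
modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring unloading
modprobe -v -r nvme-fabrics
```

The remaining steps (killing nvmf_tgt, restoring iptables minus the SPDK_NVMF rule, and tearing down the cvl_0_0_ns_spdk namespace via remove_spdk_ns) follow in the nvmftestfini output below.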
00:10:34.578 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 307931 ']' 00:10:34.578 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 307931 00:10:34.578 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 307931 ']' 00:10:34.578 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 307931 00:10:34.578 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:34.578 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:34.836 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 307931 00:10:34.837 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:34.837 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:34.837 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 307931' 00:10:34.837 killing process with pid 307931 00:10:34.837 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 307931 00:10:34.837 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 307931 00:10:35.096 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:35.096 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:35.096 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:35.096 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:35.096 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:35.096 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:35.096 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:35.096 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.096 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.096 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.096 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.096 10:29:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.007 10:29:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.007 00:10:37.007 real 0m6.608s 00:10:37.007 user 0m10.874s 00:10:37.007 sys 0m2.228s 00:10:37.007 10:29:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:37.007 10:29:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.007 ************************************ 00:10:37.007 END TEST nvmf_bdevio 00:10:37.007 ************************************ 00:10:37.007 10:29:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:37.007 00:10:37.007 real 3m56.941s 00:10:37.007 user 10m18.633s 00:10:37.007 sys 1m9.858s 00:10:37.007 
10:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:37.007 10:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.007 ************************************ 00:10:37.007 END TEST nvmf_target_core 00:10:37.007 ************************************ 00:10:37.007 10:29:25 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:37.007 10:29:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:37.007 10:29:25 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:37.007 10:29:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:37.007 ************************************ 00:10:37.007 START TEST nvmf_target_extra 00:10:37.007 ************************************ 00:10:37.007 10:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:37.268 * Looking for test storage... 00:10:37.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:37.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.268 --rc genhtml_branch_coverage=1 00:10:37.268 --rc genhtml_function_coverage=1 00:10:37.268 --rc genhtml_legend=1 00:10:37.268 --rc geninfo_all_blocks=1 00:10:37.268 --rc geninfo_unexecuted_blocks=1 00:10:37.268 00:10:37.268 ' 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:37.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.268 --rc genhtml_branch_coverage=1 00:10:37.268 --rc genhtml_function_coverage=1 00:10:37.268 --rc genhtml_legend=1 00:10:37.268 --rc geninfo_all_blocks=1 00:10:37.268 --rc geninfo_unexecuted_blocks=1 00:10:37.268 00:10:37.268 ' 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:37.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.268 --rc genhtml_branch_coverage=1 00:10:37.268 --rc genhtml_function_coverage=1 00:10:37.268 --rc genhtml_legend=1 00:10:37.268 --rc geninfo_all_blocks=1 00:10:37.268 --rc geninfo_unexecuted_blocks=1 00:10:37.268 00:10:37.268 ' 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:37.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.268 --rc genhtml_branch_coverage=1 00:10:37.268 --rc genhtml_function_coverage=1 00:10:37.268 --rc genhtml_legend=1 00:10:37.268 --rc geninfo_all_blocks=1 00:10:37.268 --rc geninfo_unexecuted_blocks=1 00:10:37.268 00:10:37.268 ' 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:37.268 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:37.269 ************************************ 00:10:37.269 START TEST nvmf_example 00:10:37.269 ************************************ 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:37.269 * Looking for test storage... 
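The "[: : integer expression expected" message above comes from nvmf/common.sh line 33, which evaluates '[' '' -eq 1 ']': test cannot compare an empty operand numerically, so it prints the diagnostic and the condition simply evaluates false, letting the script continue. A small illustration of that failure mode; the guarded variant is an editor's sketch, not what common.sh does:

```bash
flag=""                        # empty in this environment, as in the trace
[ "$flag" -eq 1 ]              # prints "[: : integer expression expected" and returns non-zero

# Defaulting the operand keeps the numeric test well-formed:
[ "${flag:-0}" -eq 1 ] && echo "flag set"
```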
00:10:37.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:37.269 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:37.529 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:37.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.530 --rc genhtml_branch_coverage=1 00:10:37.530 --rc genhtml_function_coverage=1 00:10:37.530 --rc genhtml_legend=1 00:10:37.530 --rc geninfo_all_blocks=1 00:10:37.530 --rc geninfo_unexecuted_blocks=1 00:10:37.530 00:10:37.530 ' 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:37.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.530 --rc genhtml_branch_coverage=1 00:10:37.530 --rc genhtml_function_coverage=1 00:10:37.530 --rc genhtml_legend=1 00:10:37.530 --rc geninfo_all_blocks=1 00:10:37.530 --rc geninfo_unexecuted_blocks=1 00:10:37.530 00:10:37.530 ' 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:37.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.530 --rc genhtml_branch_coverage=1 00:10:37.530 --rc genhtml_function_coverage=1 00:10:37.530 --rc genhtml_legend=1 00:10:37.530 --rc geninfo_all_blocks=1 00:10:37.530 --rc geninfo_unexecuted_blocks=1 00:10:37.530 00:10:37.530 ' 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:37.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.530 --rc genhtml_branch_coverage=1 00:10:37.530 --rc genhtml_function_coverage=1 00:10:37.530 --rc genhtml_legend=1 00:10:37.530 --rc geninfo_all_blocks=1 00:10:37.530 --rc geninfo_unexecuted_blocks=1 00:10:37.530 00:10:37.530 ' 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:37.530 10:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:37.530 10:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.530 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:40.066 10:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.066 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:10:40.067 Found 0000:82:00.0 (0x8086 - 0x159b) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:10:40.067 Found 0000:82:00.1 (0x8086 - 0x159b) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:10:40.067 Found net devices under 0000:82:00.0: cvl_0_0 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:10:40.067 Found net devices under 0000:82:00.1: cvl_0_1 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.067 10:29:27 
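Aside: the discovery loop traced above maps each E810 PCI function to its kernel netdev by globbing sysfs and keeping only the basename. A rough bash sketch of that pattern, reusing the PCI address and cvl_0_0 name printed in this run (illustrative only, not the literal nvmf/common.sh source):

pci=0000:82:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. /sys/bus/pci/devices/0000:82:00.0/net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep only the interface name
net_devs+=("${pci_net_devs[@]}")
echo "Found net devices under $pci: ${net_devs[*]}"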
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.067 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:40.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:10:40.067 00:10:40.067 --- 10.0.0.2 ping statistics --- 00:10:40.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.067 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:10:40.067 00:10:40.067 --- 10.0.0.1 ping statistics --- 00:10:40.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.067 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=310221 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 310221 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 310221 ']' 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:40.067 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.068 10:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:40.068 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:52.270 Initializing NVMe Controllers 00:10:52.270 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:52.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:52.270 Initialization complete. Launching workers. 00:10:52.270 ======================================================== 00:10:52.270 Latency(us) 00:10:52.270 Device Information : IOPS MiB/s Average min max 00:10:52.270 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14838.70 57.96 4314.59 851.24 16418.98 00:10:52.270 ======================================================== 00:10:52.270 Total : 14838.70 57.96 4314.59 851.24 16418.98 00:10:52.270 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.270 rmmod nvme_tcp 00:10:52.270 rmmod nvme_fabrics 00:10:52.270 rmmod nvme_keyring 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 310221 ']' 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 310221 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 310221 ']' 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 310221 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 310221 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # 
process_name=nvmf 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 310221' 00:10:52.270 killing process with pid 310221 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 310221 00:10:52.270 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 310221 00:10:52.270 nvmf threads initialize successfully 00:10:52.270 bdev subsystem init successfully 00:10:52.270 created a nvmf target service 00:10:52.270 create targets's poll groups done 00:10:52.270 all subsystems of target started 00:10:52.270 nvmf target is running 00:10:52.270 all subsystems of target stopped 00:10:52.270 destroy targets's poll groups done 00:10:52.270 destroyed the nvmf target service 00:10:52.270 bdev subsystem finish successfully 00:10:52.270 nvmf threads destroy successfully 00:10:52.270 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.270 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:52.270 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:52.270 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:52.270 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:52.270 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:52.270 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:52.270 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:52.270 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:52.270 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.270 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.270 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.841 00:10:52.841 real 0m15.426s 00:10:52.841 user 0m41.970s 00:10:52.841 sys 0m3.679s 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.841 ************************************ 00:10:52.841 END TEST nvmf_example 00:10:52.841 ************************************ 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:52.841 10:29:41 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:52.841 ************************************ 00:10:52.841 START TEST nvmf_filesystem 00:10:52.841 ************************************ 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:52.841 * Looking for test storage... 00:10:52.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:52.841 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:52.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.842 --rc genhtml_branch_coverage=1 00:10:52.842 --rc genhtml_function_coverage=1 00:10:52.842 --rc genhtml_legend=1 00:10:52.842 --rc geninfo_all_blocks=1 00:10:52.842 --rc geninfo_unexecuted_blocks=1 00:10:52.842 00:10:52.842 ' 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:52.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.842 --rc genhtml_branch_coverage=1 00:10:52.842 --rc genhtml_function_coverage=1 00:10:52.842 --rc genhtml_legend=1 00:10:52.842 --rc geninfo_all_blocks=1 00:10:52.842 --rc geninfo_unexecuted_blocks=1 00:10:52.842 00:10:52.842 ' 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:52.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.842 --rc genhtml_branch_coverage=1 00:10:52.842 --rc genhtml_function_coverage=1 00:10:52.842 --rc genhtml_legend=1 00:10:52.842 --rc geninfo_all_blocks=1 00:10:52.842 --rc geninfo_unexecuted_blocks=1 00:10:52.842 00:10:52.842 ' 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:52.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.842 --rc genhtml_branch_coverage=1 00:10:52.842 --rc genhtml_function_coverage=1 00:10:52.842 --rc genhtml_legend=1 00:10:52.842 --rc geninfo_all_blocks=1 00:10:52.842 --rc geninfo_unexecuted_blocks=1 00:10:52.842 00:10:52.842 ' 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:52.842 10:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:52.842 
10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:52.842 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:52.843 #define SPDK_CONFIG_H 00:10:52.843 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:52.843 #define SPDK_CONFIG_APPS 1 00:10:52.843 #define SPDK_CONFIG_ARCH native 00:10:52.843 #undef SPDK_CONFIG_ASAN 00:10:52.843 #undef SPDK_CONFIG_AVAHI 00:10:52.843 #undef SPDK_CONFIG_CET 00:10:52.843 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:52.843 #define SPDK_CONFIG_COVERAGE 1 00:10:52.843 #define SPDK_CONFIG_CROSS_PREFIX 00:10:52.843 #undef SPDK_CONFIG_CRYPTO 00:10:52.843 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:52.843 #undef SPDK_CONFIG_CUSTOMOCF 00:10:52.843 #undef SPDK_CONFIG_DAOS 00:10:52.843 #define SPDK_CONFIG_DAOS_DIR 00:10:52.843 #define SPDK_CONFIG_DEBUG 1 00:10:52.843 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:52.843 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:52.843 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:52.843 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:52.843 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:52.843 #undef SPDK_CONFIG_DPDK_UADK 00:10:52.843 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:52.843 #define SPDK_CONFIG_EXAMPLES 1 00:10:52.843 #undef SPDK_CONFIG_FC 00:10:52.843 #define SPDK_CONFIG_FC_PATH 00:10:52.843 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:52.843 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:52.843 #define SPDK_CONFIG_FSDEV 1 00:10:52.843 #undef SPDK_CONFIG_FUSE 00:10:52.843 #undef SPDK_CONFIG_FUZZER 00:10:52.843 #define SPDK_CONFIG_FUZZER_LIB 00:10:52.843 #undef SPDK_CONFIG_GOLANG 00:10:52.843 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:52.843 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:52.843 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:52.843 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:52.843 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:52.843 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:52.843 #undef SPDK_CONFIG_HAVE_LZ4 00:10:52.843 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:52.843 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:52.843 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:52.843 #define SPDK_CONFIG_IDXD 1 00:10:52.843 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:52.843 #undef SPDK_CONFIG_IPSEC_MB 00:10:52.843 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:52.843 #define SPDK_CONFIG_ISAL 1 00:10:52.843 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:52.843 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:52.843 #define SPDK_CONFIG_LIBDIR 00:10:52.843 #undef SPDK_CONFIG_LTO 00:10:52.843 #define SPDK_CONFIG_MAX_LCORES 128 00:10:52.843 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:52.843 #define SPDK_CONFIG_NVME_CUSE 1 00:10:52.843 #undef SPDK_CONFIG_OCF 00:10:52.843 #define SPDK_CONFIG_OCF_PATH 00:10:52.843 #define SPDK_CONFIG_OPENSSL_PATH 00:10:52.843 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:52.843 #define SPDK_CONFIG_PGO_DIR 00:10:52.843 #undef SPDK_CONFIG_PGO_USE 00:10:52.843 #define SPDK_CONFIG_PREFIX /usr/local 00:10:52.843 #undef SPDK_CONFIG_RAID5F 00:10:52.843 #undef SPDK_CONFIG_RBD 00:10:52.843 #define SPDK_CONFIG_RDMA 1 00:10:52.843 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:52.843 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:52.843 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:52.843 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:52.843 #define SPDK_CONFIG_SHARED 1 00:10:52.843 #undef SPDK_CONFIG_SMA 00:10:52.843 #define SPDK_CONFIG_TESTS 1 00:10:52.843 #undef SPDK_CONFIG_TSAN 
00:10:52.843 #define SPDK_CONFIG_UBLK 1 00:10:52.843 #define SPDK_CONFIG_UBSAN 1 00:10:52.843 #undef SPDK_CONFIG_UNIT_TESTS 00:10:52.843 #undef SPDK_CONFIG_URING 00:10:52.843 #define SPDK_CONFIG_URING_PATH 00:10:52.843 #undef SPDK_CONFIG_URING_ZNS 00:10:52.843 #undef SPDK_CONFIG_USDT 00:10:52.843 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:52.843 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:52.843 #define SPDK_CONFIG_VFIO_USER 1 00:10:52.843 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:52.843 #define SPDK_CONFIG_VHOST 1 00:10:52.843 #define SPDK_CONFIG_VIRTIO 1 00:10:52.843 #undef SPDK_CONFIG_VTUNE 00:10:52.843 #define SPDK_CONFIG_VTUNE_DIR 00:10:52.843 #define SPDK_CONFIG_WERROR 1 00:10:52.843 #define SPDK_CONFIG_WPDK_DIR 00:10:52.843 #undef SPDK_CONFIG_XNVME 00:10:52.843 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.843 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:52.844 10:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:52.844 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
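Aside: the nvmf_example run traced earlier in this log reduces to a short command sequence: put one E810 port in a network namespace for the target, start the example app, configure it over RPC, and drive it with spdk_nvme_perf from the other port. The sketch below is a hand-written approximation that reuses the interfaces, addresses, and arguments printed above; it assumes scripts/rpc.py in place of the harness' rpc_cmd wrapper and is not the literal content of nvmf_example.sh or nvmf/common.sh.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Target side: move cvl_0_0 into a namespace and give each end an address, as nvmf_tcp_init did above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Start the example target and give it a moment to open its RPC socket (the harness uses waitforlisten here).
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
sleep 2
# Same RPC sequence as the rpc_cmd calls above: transport, malloc bdev, subsystem, namespace, listener.
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: the 4 KiB random read/write run whose latency table appears above.
"$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'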
00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:53.108 10:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:53.108 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
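The long LD_LIBRARY_PATH and PYTHONPATH values above repeat the same directories several times because the same export script is sourced once per nested test script; the duplicates are harmless but noisy. A hedged, stand-alone illustration of trimming such a colon-separated list (this helper is not part of the SPDK scripts):

  # Remove duplicate entries from a colon-separated list, keeping first occurrences in order.
  dedup_path() {
    local IFS=':' seen=':' out='' dir
    for dir in $1; do
      [[ $seen == *":$dir:"* ]] || { out+="${out:+:}$dir"; seen+="$dir:"; }
    done
    printf '%s\n' "$out"
  }
  LD_LIBRARY_PATH=$(dedup_path "$LD_LIBRARY_PATH")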
00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:53.109 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
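The asan_suppression_file / LSAN_OPTIONS steps traced just above amount to writing a single LeakSanitizer suppression and pointing LSAN at it. A simplified reconstruction that condenses the traced rm/cat/echo sequence (not the literal script body):

  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"
  echo 'leak:libfuse3.so' > "$asan_suppression_file"          # ignore known libfuse3 leak reports
  export LSAN_OPTIONS="suppressions=$asan_suppression_file"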
00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 311909 ]] 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 311909 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:53.110 
10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.GezOEt 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.GezOEt/tests/target /tmp/spdk.GezOEt 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:53.110 10:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=56229662720 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988528128 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5758865408 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30982897664 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375285760 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22421504 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30993956864 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=307200 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:53.110 10:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:53.110 * Looking for test storage... 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=56229662720 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=7973457920 00:10:53.110 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:53.111 10:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:53.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.111 --rc genhtml_branch_coverage=1 00:10:53.111 --rc genhtml_function_coverage=1 00:10:53.111 --rc genhtml_legend=1 00:10:53.111 --rc geninfo_all_blocks=1 00:10:53.111 --rc geninfo_unexecuted_blocks=1 00:10:53.111 00:10:53.111 ' 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:53.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.111 --rc genhtml_branch_coverage=1 00:10:53.111 --rc genhtml_function_coverage=1 00:10:53.111 --rc genhtml_legend=1 00:10:53.111 --rc geninfo_all_blocks=1 00:10:53.111 --rc geninfo_unexecuted_blocks=1 00:10:53.111 00:10:53.111 ' 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:53.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.111 --rc genhtml_branch_coverage=1 00:10:53.111 --rc genhtml_function_coverage=1 00:10:53.111 --rc genhtml_legend=1 00:10:53.111 --rc geninfo_all_blocks=1 00:10:53.111 --rc geninfo_unexecuted_blocks=1 00:10:53.111 00:10:53.111 ' 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:53.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.111 --rc genhtml_branch_coverage=1 00:10:53.111 --rc genhtml_function_coverage=1 00:10:53.111 --rc genhtml_legend=1 00:10:53.111 --rc geninfo_all_blocks=1 00:10:53.111 --rc geninfo_unexecuted_blocks=1 00:10:53.111 00:10:53.111 ' 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.111 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:53.112 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.645 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.645 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.645 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.645 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.645 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.645 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.645 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.645 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.645 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.645 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:55.645 
10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:10:55.646 Found 0000:82:00.0 (0x8086 - 0x159b) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:10:55.646 Found 0000:82:00.1 (0x8086 - 0x159b) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:10:55.646 Found net devices under 0000:82:00.0: cvl_0_0 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:10:55.646 Found net devices under 
0000:82:00.1: cvl_0_1 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:10:55.646 00:10:55.646 --- 10.0.0.2 ping statistics --- 00:10:55.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.646 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:10:55.646 00:10:55.646 --- 10.0.0.1 ping statistics --- 00:10:55.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.646 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:55.646 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.647 ************************************ 00:10:55.647 START TEST nvmf_filesystem_no_in_capsule 00:10:55.647 ************************************ 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
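The nvmf_tcp_init sequence traced above isolates the target-side port (cvl_0_0) in its own network namespace while the initiator-side port (cvl_0_1) stays in the host, so the NVMe/TCP traffic genuinely crosses between the two E810 ports instead of looping back. A minimal sketch of the same setup, using only the interface names, addresses, and port shown in the trace (the harness's version in nvmf/common.sh additionally tags the iptables rule with an SPDK_NVMF comment so it can find and remove it later):

    # move the target-side port into a private namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator address on the host, target address inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # let NVMe/TCP traffic for port 4420 in on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # reachability check in both directions before any NVMe traffic is sent
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1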
00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=313559 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 313559 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 313559 ']' 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:55.647 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.647 [2024-11-15 10:29:43.937156] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:10:55.647 [2024-11-15 10:29:43.937240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.647 [2024-11-15 10:29:44.006495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.647 [2024-11-15 10:29:44.062746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.647 [2024-11-15 10:29:44.062805] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.647 [2024-11-15 10:29:44.062833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.647 [2024-11-15 10:29:44.062845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.647 [2024-11-15 10:29:44.062855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
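nvmfappstart launches the SPDK target inside that namespace and then blocks until the application is ready on its RPC socket at /var/tmp/spdk.sock. The loop below is only a rough approximation of that start-and-wait pattern; the real waitforlisten helper in common/autotest_common.sh is more thorough (it verifies the RPC server actually responds rather than merely checking that the socket file exists):

    # launch nvmf_tgt in the target namespace: shm id 0, tracepoint mask 0xFFFF, cores 0-3
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # wait for the RPC UNIX socket to appear, bailing out if the target dies first
    while [ ! -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup"; exit 1; }
        sleep 0.5
    done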
00:10:55.647 [2024-11-15 10:29:44.064321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.647 [2024-11-15 10:29:44.064387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.647 [2024-11-15 10:29:44.064455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.647 [2024-11-15 10:29:44.064458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.905 [2024-11-15 10:29:44.209968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.905 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.163 Malloc1 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.163 10:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.163 [2024-11-15 10:29:44.395196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.163 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:56.163 { 00:10:56.163 "name": "Malloc1", 00:10:56.163 "aliases": [ 00:10:56.163 "9f061441-2853-4de5-ba3d-c9f54c56ff33" 00:10:56.163 ], 00:10:56.163 "product_name": "Malloc disk", 00:10:56.163 "block_size": 512, 00:10:56.163 "num_blocks": 1048576, 00:10:56.163 "uuid": "9f061441-2853-4de5-ba3d-c9f54c56ff33", 00:10:56.163 "assigned_rate_limits": { 00:10:56.163 "rw_ios_per_sec": 0, 00:10:56.163 "rw_mbytes_per_sec": 0, 00:10:56.163 "r_mbytes_per_sec": 0, 00:10:56.163 "w_mbytes_per_sec": 0 00:10:56.163 }, 00:10:56.163 "claimed": true, 00:10:56.163 "claim_type": "exclusive_write", 00:10:56.163 "zoned": false, 00:10:56.163 "supported_io_types": { 00:10:56.163 "read": 
true, 00:10:56.163 "write": true, 00:10:56.163 "unmap": true, 00:10:56.163 "flush": true, 00:10:56.163 "reset": true, 00:10:56.163 "nvme_admin": false, 00:10:56.163 "nvme_io": false, 00:10:56.163 "nvme_io_md": false, 00:10:56.163 "write_zeroes": true, 00:10:56.163 "zcopy": true, 00:10:56.163 "get_zone_info": false, 00:10:56.163 "zone_management": false, 00:10:56.163 "zone_append": false, 00:10:56.163 "compare": false, 00:10:56.163 "compare_and_write": false, 00:10:56.163 "abort": true, 00:10:56.163 "seek_hole": false, 00:10:56.163 "seek_data": false, 00:10:56.163 "copy": true, 00:10:56.163 "nvme_iov_md": false 00:10:56.163 }, 00:10:56.163 "memory_domains": [ 00:10:56.163 { 00:10:56.163 "dma_device_id": "system", 00:10:56.163 "dma_device_type": 1 00:10:56.163 }, 00:10:56.163 { 00:10:56.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.163 "dma_device_type": 2 00:10:56.163 } 00:10:56.163 ], 00:10:56.164 "driver_specific": {} 00:10:56.164 } 00:10:56.164 ]' 00:10:56.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:56.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:56.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:56.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:56.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:56.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:56.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:56.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.728 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.728 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:56.728 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.728 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:56.728 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:59.256 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:59.257 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:59.257 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:59.515 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.449 ************************************ 00:11:00.449 START TEST filesystem_ext4 00:11:00.449 ************************************ 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
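Once the namespace shows up on the initiator as nvme0n1 with the expected 512 MiB capacity, parted writes a GPT label with a single SPDK_TEST partition, and every filesystem_* subtest that follows runs the same cycle against /dev/nvme0n1p1: create a filesystem, mount it, create and remove a file with syncs in between, unmount, and confirm the target process is still alive. A condensed sketch of that cycle (target/filesystem.sh also retries the mkfs and umount steps a few times on transient failures, which is left out here):

    for fstype in ext4 btrfs xfs; do
        force=-f
        [ "$fstype" = ext4 ] && force=-F          # mke2fs spells the force flag differently
        mkfs."$fstype" "$force" /dev/nvme0n1p1
        mount /dev/nvme0n1p1 /mnt/device
        touch /mnt/device/aaa && sync
        rm /mnt/device/aaa && sync
        umount /mnt/device
        kill -0 "$nvmfpid"                        # the nvmf target must have survived the I/O
    done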
00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:00.449 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:00.449 mke2fs 1.47.0 (5-Feb-2023) 00:11:00.449 Discarding device blocks: 0/522240 done 00:11:00.449 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:00.449 Filesystem UUID: d2828b5c-b9fd-4eb0-8c98-073445339423 00:11:00.449 Superblock backups stored on blocks: 00:11:00.449 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:00.449 00:11:00.449 Allocating group tables: 0/64 done 00:11:00.449 Writing inode tables: 0/64 done 00:11:00.707 Creating journal (8192 blocks): done 00:11:02.903 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:11:02.903 00:11:02.903 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:02.903 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.458 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.459 
10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 313559 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.459 00:11:09.459 real 0m8.657s 00:11:09.459 user 0m0.020s 00:11:09.459 sys 0m0.088s 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:09.459 ************************************ 00:11:09.459 END TEST filesystem_ext4 00:11:09.459 ************************************ 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.459 ************************************ 00:11:09.459 START TEST filesystem_btrfs 00:11:09.459 ************************************ 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:09.459 10:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:09.459 btrfs-progs v6.8.1 00:11:09.459 See https://btrfs.readthedocs.io for more information. 00:11:09.459 00:11:09.459 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:09.459 NOTE: several default settings have changed in version 5.15, please make sure 00:11:09.459 this does not affect your deployments: 00:11:09.459 - DUP for metadata (-m dup) 00:11:09.459 - enabled no-holes (-O no-holes) 00:11:09.459 - enabled free-space-tree (-R free-space-tree) 00:11:09.459 00:11:09.459 Label: (null) 00:11:09.459 UUID: fe1a5425-91cc-42ec-aeee-c5ea7b3bac1c 00:11:09.459 Node size: 16384 00:11:09.459 Sector size: 4096 (CPU page size: 4096) 00:11:09.459 Filesystem size: 510.00MiB 00:11:09.459 Block group profiles: 00:11:09.459 Data: single 8.00MiB 00:11:09.459 Metadata: DUP 32.00MiB 00:11:09.459 System: DUP 8.00MiB 00:11:09.459 SSD detected: yes 00:11:09.459 Zoned device: no 00:11:09.459 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:09.459 Checksum: crc32c 00:11:09.459 Number of devices: 1 00:11:09.459 Devices: 00:11:09.459 ID SIZE PATH 00:11:09.459 1 510.00MiB /dev/nvme0n1p1 00:11:09.459 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:09.459 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 313559 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.393 
10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.393 00:11:10.393 real 0m1.127s 00:11:10.393 user 0m0.022s 00:11:10.393 sys 0m0.128s 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:10.393 ************************************ 00:11:10.393 END TEST filesystem_btrfs 00:11:10.393 ************************************ 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.393 ************************************ 00:11:10.393 START TEST filesystem_xfs 00:11:10.393 ************************************ 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:10.393 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:10.393 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:10.393 = sectsz=512 attr=2, projid32bit=1 00:11:10.393 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:10.393 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:10.393 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:10.393 = sunit=0 swidth=0 blks 00:11:10.393 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:10.393 log =internal log bsize=4096 blocks=16384, version=2 00:11:10.393 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:10.393 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:11.326 Discarding blocks...Done. 00:11:11.326 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:11.326 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 313559 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:14.607 00:11:14.607 real 0m4.145s 00:11:14.607 user 0m0.023s 00:11:14.607 sys 0m0.090s 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:14.607 ************************************ 00:11:14.607 END TEST filesystem_xfs 00:11:14.607 ************************************ 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:14.607 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.607 10:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.607 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:14.607 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:14.607 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.607 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:14.607 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.607 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:14.607 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.607 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.607 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.865 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.865 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:14.865 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 313559 00:11:14.865 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 313559 ']' 00:11:14.866 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 313559 00:11:14.866 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:14.866 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:14.866 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 313559 00:11:14.866 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:14.866 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:14.866 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 313559' 00:11:14.866 killing process with pid 313559 00:11:14.866 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 313559 00:11:14.866 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 313559 00:11:15.124 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:15.124 00:11:15.124 real 0m19.653s 00:11:15.124 user 1m16.174s 00:11:15.124 sys 0m2.487s 00:11:15.124 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:15.124 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.124 ************************************ 00:11:15.124 END TEST nvmf_filesystem_no_in_capsule 00:11:15.124 ************************************ 00:11:15.124 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:15.124 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:15.124 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:15.124 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:15.124 ************************************ 00:11:15.124 START TEST nvmf_filesystem_in_capsule 00:11:15.124 ************************************ 00:11:15.124 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:11:15.124 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:15.124 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:15.124 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:15.382 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.382 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.382 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=316289 00:11:15.382 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.382 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 316289 00:11:15.382 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 316289 ']' 00:11:15.382 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.382 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:15.382 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
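The suite then repeats with in-capsule data enabled: run_test passes 4096 instead of 0 to nvmf_filesystem_part, and that value lands in the -c argument of nvmf_create_transport, which in SPDK's RPC sets the maximum amount of data a host may carry inside the command capsule itself. The filesystem workload is otherwise identical, so the two passes differ only in the transport creation step, roughly as follows (both command lines appear verbatim in the trace):

    # nvmf_filesystem_no_in_capsule: data is never carried in the command capsule
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0

    # nvmf_filesystem_in_capsule: up to 4096 bytes may travel in-capsule with the command
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096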
00:11:15.382 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:15.382 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.382 [2024-11-15 10:30:03.646595] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:11:15.382 [2024-11-15 10:30:03.646707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.382 [2024-11-15 10:30:03.718774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.382 [2024-11-15 10:30:03.779147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.382 [2024-11-15 10:30:03.779208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.382 [2024-11-15 10:30:03.779236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.382 [2024-11-15 10:30:03.779247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.382 [2024-11-15 10:30:03.779256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.382 [2024-11-15 10:30:03.780878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.382 [2024-11-15 10:30:03.780943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.382 [2024-11-15 10:30:03.781009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.382 [2024-11-15 10:30:03.781012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.640 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:15.640 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:15.640 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:15.640 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.640 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.640 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.640 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:15.640 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:15.640 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.640 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.640 [2024-11-15 10:30:03.934049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.640 10:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.640 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:15.640 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.640 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.640 Malloc1 00:11:15.640 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.640 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:15.640 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.640 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.898 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.899 [2024-11-15 10:30:04.123979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:15.899 10:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:15.899 { 00:11:15.899 "name": "Malloc1", 00:11:15.899 "aliases": [ 00:11:15.899 "76ed50b0-6d6a-49f8-913e-0cb7b75caf13" 00:11:15.899 ], 00:11:15.899 "product_name": "Malloc disk", 00:11:15.899 "block_size": 512, 00:11:15.899 "num_blocks": 1048576, 00:11:15.899 "uuid": "76ed50b0-6d6a-49f8-913e-0cb7b75caf13", 00:11:15.899 "assigned_rate_limits": { 00:11:15.899 "rw_ios_per_sec": 0, 00:11:15.899 "rw_mbytes_per_sec": 0, 00:11:15.899 "r_mbytes_per_sec": 0, 00:11:15.899 "w_mbytes_per_sec": 0 00:11:15.899 }, 00:11:15.899 "claimed": true, 00:11:15.899 "claim_type": "exclusive_write", 00:11:15.899 "zoned": false, 00:11:15.899 "supported_io_types": { 00:11:15.899 "read": true, 00:11:15.899 "write": true, 00:11:15.899 "unmap": true, 00:11:15.899 "flush": true, 00:11:15.899 "reset": true, 00:11:15.899 "nvme_admin": false, 00:11:15.899 "nvme_io": false, 00:11:15.899 "nvme_io_md": false, 00:11:15.899 "write_zeroes": true, 00:11:15.899 "zcopy": true, 00:11:15.899 "get_zone_info": false, 00:11:15.899 "zone_management": false, 00:11:15.899 "zone_append": false, 00:11:15.899 "compare": false, 00:11:15.899 "compare_and_write": false, 00:11:15.899 "abort": true, 00:11:15.899 "seek_hole": false, 00:11:15.899 "seek_data": false, 00:11:15.899 "copy": true, 00:11:15.899 "nvme_iov_md": false 00:11:15.899 }, 00:11:15.899 "memory_domains": [ 00:11:15.899 { 00:11:15.899 "dma_device_id": "system", 00:11:15.899 "dma_device_type": 1 00:11:15.899 }, 00:11:15.899 { 00:11:15.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.899 "dma_device_type": 2 00:11:15.899 } 00:11:15.899 ], 00:11:15.899 "driver_specific": {} 00:11:15.899 } 00:11:15.899 ]' 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:15.899 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.464 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.464 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:16.464 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.464 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:16.464 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:18.992 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:18.992 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:18.992 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.992 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:18.992 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.992 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:18.992 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:18.992 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:18.992 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:18.992 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:18.992 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:18.992 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:18.992 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:18.993 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:18.993 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:18.993 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:18.993 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:18.993 10:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:19.559 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.492 ************************************ 00:11:20.492 START TEST filesystem_in_capsule_ext4 00:11:20.492 ************************************ 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:20.492 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:20.492 mke2fs 1.47.0 (5-Feb-2023) 00:11:20.492 Discarding device blocks: 0/522240 done 00:11:20.750 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:20.750 Filesystem UUID: e94221fc-eea9-43d3-9550-5bdfd0105a34 00:11:20.750 Superblock backups stored on blocks: 00:11:20.750 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:20.750 00:11:20.750 Allocating group tables: 0/64 done 00:11:20.750 Writing inode tables: 
0/64 done 00:11:23.303 Creating journal (8192 blocks): done 00:11:23.303 Writing superblocks and filesystem accounting information: 0/64 done 00:11:23.303 00:11:23.561 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:23.561 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 316289 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:30.121 00:11:30.121 real 0m8.759s 00:11:30.121 user 0m0.022s 00:11:30.121 sys 0m0.062s 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:30.121 ************************************ 00:11:30.121 END TEST filesystem_in_capsule_ext4 00:11:30.121 ************************************ 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.121 
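The ext4 pass that just finished, and the btrfs and xfs passes that follow, all reuse the target/initiator plumbing set up at the top of this test. Condensed from the xtrace above, the flow is roughly the following (rpc_cmd is the suite's helper for issuing SPDK JSON-RPCs to the running nvmf_tgt; the hostnqn/hostid, 10.0.0.2:4420 and pid 316289 are the values from this particular run, not defaults):

    # Target side: back a subsystem with a 512 MiB malloc bdev and listen on NVMe/TCP
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: connect, locate the namespace by serial, carve one GPT partition
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd \
                 --hostid=8b464f06-2980-e311-ba20-001e67a94acd \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME     # waitforserial -> nvme0n1
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1

    # Per-filesystem check (ext4 above; btrfs and xfs below): format, mount,
    # create and delete a file, unmount, then confirm the target did not crash
    mkfs.ext4 -F /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 316289        # 316289 is this run's nvmf_tgt pid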
************************************ 00:11:30.121 START TEST filesystem_in_capsule_btrfs 00:11:30.121 ************************************ 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:30.121 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:30.121 btrfs-progs v6.8.1 00:11:30.121 See https://btrfs.readthedocs.io for more information. 00:11:30.121 00:11:30.121 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:30.121 NOTE: several default settings have changed in version 5.15, please make sure 00:11:30.121 this does not affect your deployments: 00:11:30.121 - DUP for metadata (-m dup) 00:11:30.121 - enabled no-holes (-O no-holes) 00:11:30.121 - enabled free-space-tree (-R free-space-tree) 00:11:30.121 00:11:30.121 Label: (null) 00:11:30.121 UUID: e8e0a7a9-8552-453c-a58f-70f749cb5586 00:11:30.121 Node size: 16384 00:11:30.121 Sector size: 4096 (CPU page size: 4096) 00:11:30.121 Filesystem size: 510.00MiB 00:11:30.121 Block group profiles: 00:11:30.121 Data: single 8.00MiB 00:11:30.121 Metadata: DUP 32.00MiB 00:11:30.121 System: DUP 8.00MiB 00:11:30.121 SSD detected: yes 00:11:30.121 Zoned device: no 00:11:30.121 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:30.121 Checksum: crc32c 00:11:30.121 Number of devices: 1 00:11:30.121 Devices: 00:11:30.121 ID SIZE PATH 00:11:30.121 1 510.00MiB /dev/nvme0n1p1 00:11:30.121 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 316289 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:30.121 00:11:30.121 real 0m0.732s 00:11:30.121 user 0m0.016s 00:11:30.121 sys 0m0.096s 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:30.121 ************************************ 00:11:30.121 END TEST filesystem_in_capsule_btrfs 00:11:30.121 ************************************ 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.121 ************************************ 00:11:30.121 START TEST filesystem_in_capsule_xfs 00:11:30.121 ************************************ 00:11:30.121 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:30.122 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:30.122 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:30.122 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:30.122 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:30.122 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:30.122 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:30.122 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:30.122 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:30.122 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:30.122 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:30.122 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:30.122 = sectsz=512 attr=2, projid32bit=1 00:11:30.122 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:30.122 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:30.122 data = bsize=4096 blocks=130560, imaxpct=25 00:11:30.122 = sunit=0 swidth=0 blks 00:11:30.122 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:30.122 log =internal log bsize=4096 blocks=16384, version=2 00:11:30.122 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:30.122 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:31.496 Discarding blocks...Done. 
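All three filesystem variants funnel into the same make_filesystem helper, which is what produces the mkfs.ext4 -F / mkfs.btrfs -f / mkfs.xfs -f calls seen in the trace. A rough reconstruction of the traced path (autotest_common.sh@928-@947); the local i=0 suggests a retry counter whose loop never fires in this run, so treat this as a sketch of what the xtrace shows rather than the exact SPDK source:

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0             # retry counter; no retries were needed here
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F          # mkfs.ext4 forces with -F (@933-@934)
        else
            force=-f          # mkfs.btrfs and mkfs.xfs force with -f (@936)
        fi
        mkfs.$fstype $force "$dev_name"    # @939
        return 0                           # @947
    }

The force flags matter here because the same /dev/nvme0n1p1 partition is reformatted three times in a row (ext4, then btrfs, then xfs) without being wiped in between.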
00:11:31.496 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:31.496 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.870 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.870 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:32.870 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.870 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:32.870 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:32.870 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.870 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 316289 00:11:32.870 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.870 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.870 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.870 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.870 00:11:32.870 real 0m2.894s 00:11:32.870 user 0m0.016s 00:11:32.870 sys 0m0.053s 00:11:32.870 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:32.870 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:32.870 ************************************ 00:11:32.870 END TEST filesystem_in_capsule_xfs 00:11:32.870 ************************************ 00:11:33.129 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:33.387 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:33.387 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 316289 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 316289 ']' 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 316289 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:33.388 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 316289 00:11:33.646 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:33.646 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:33.646 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 316289' 00:11:33.646 killing process with pid 316289 00:11:33.646 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 316289 00:11:33.646 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 316289 00:11:33.906 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:33.906 00:11:33.906 real 0m18.718s 00:11:33.906 user 1m12.431s 00:11:33.906 sys 0m2.338s 00:11:33.906 10:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:33.906 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.906 ************************************ 00:11:33.906 END TEST nvmf_filesystem_in_capsule 00:11:33.906 ************************************ 00:11:33.906 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:33.906 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:33.906 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:33.906 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:33.906 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:33.906 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.906 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.906 rmmod nvme_tcp 00:11:33.906 rmmod nvme_fabrics 00:11:33.906 rmmod nvme_keyring 00:11:34.164 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:34.164 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:34.164 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:34.164 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:34.164 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:34.164 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:34.165 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:34.165 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:34.165 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:34.165 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:34.165 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:34.165 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:34.165 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:34.165 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.165 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.165 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.074 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:36.074 00:11:36.074 real 0m43.314s 00:11:36.074 user 2m29.762s 00:11:36.074 sys 0m6.630s 00:11:36.074 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:36.074 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.074 
************************************ 00:11:36.074 END TEST nvmf_filesystem 00:11:36.074 ************************************ 00:11:36.074 10:30:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:36.074 10:30:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:36.074 10:30:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:36.074 10:30:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:36.074 ************************************ 00:11:36.074 START TEST nvmf_target_discovery 00:11:36.074 ************************************ 00:11:36.074 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:36.333 * Looking for test storage... 00:11:36.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.333 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:36.333 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:36.333 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:36.333 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:36.333 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.333 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.333 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.333 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.333 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.333 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:36.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.334 --rc genhtml_branch_coverage=1 00:11:36.334 --rc genhtml_function_coverage=1 00:11:36.334 --rc genhtml_legend=1 00:11:36.334 --rc geninfo_all_blocks=1 00:11:36.334 --rc geninfo_unexecuted_blocks=1 00:11:36.334 00:11:36.334 ' 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:36.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.334 --rc genhtml_branch_coverage=1 00:11:36.334 --rc genhtml_function_coverage=1 00:11:36.334 --rc genhtml_legend=1 00:11:36.334 --rc geninfo_all_blocks=1 00:11:36.334 --rc geninfo_unexecuted_blocks=1 00:11:36.334 00:11:36.334 ' 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:36.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.334 --rc genhtml_branch_coverage=1 00:11:36.334 --rc genhtml_function_coverage=1 00:11:36.334 --rc genhtml_legend=1 00:11:36.334 --rc geninfo_all_blocks=1 00:11:36.334 --rc geninfo_unexecuted_blocks=1 00:11:36.334 00:11:36.334 ' 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:36.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.334 --rc genhtml_branch_coverage=1 00:11:36.334 --rc genhtml_function_coverage=1 00:11:36.334 --rc genhtml_legend=1 00:11:36.334 --rc geninfo_all_blocks=1 00:11:36.334 --rc geninfo_unexecuted_blocks=1 00:11:36.334 00:11:36.334 ' 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.334 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.335 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.335 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.335 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:36.335 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:36.335 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.335 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:38.871 10:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:11:38.871 Found 0000:82:00.0 (0x8086 - 0x159b) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:11:38.871 Found 0000:82:00.1 (0x8086 - 0x159b) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.871 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:11:38.872 Found net devices under 0000:82:00.0: cvl_0_0 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:11:38.872 Found net devices under 0000:82:00.1: cvl_0_1 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.872 10:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:38.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:11:38.872 00:11:38.872 --- 10.0.0.2 ping statistics --- 00:11:38.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.872 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:38.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:11:38.872 00:11:38.872 --- 10.0.0.1 ping statistics --- 00:11:38.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.872 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=321095 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 321095 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 321095 ']' 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:38.872 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.872 [2024-11-15 10:30:27.019579] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:11:38.872 [2024-11-15 10:30:27.019673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.872 [2024-11-15 10:30:27.090870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.872 [2024-11-15 10:30:27.146142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.872 [2024-11-15 10:30:27.146197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.872 [2024-11-15 10:30:27.146218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.872 [2024-11-15 10:30:27.146229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.872 [2024-11-15 10:30:27.146254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
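At this point nvmf/common.sh has split the two ports across a network namespace and launched nvmf_tgt inside it (pid 321095, core mask 0xF). A condensed, hand-runnable sketch of that split, with the namespace, interface names, addresses and iptables rule copied from the trace above (run as root; the harness first flushes any stale addresses on both ports):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator-side port stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                             # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host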
00:11:38.872 [2024-11-15 10:30:27.147750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.872 [2024-11-15 10:30:27.147808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.872 [2024-11-15 10:30:27.147872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.872 [2024-11-15 10:30:27.147875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.872 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:38.872 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:38.872 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.873 [2024-11-15 10:30:27.293491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.873 Null1 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.873 10:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.873 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 [2024-11-15 10:30:27.337816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 Null2 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:39.132 Null3 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 Null4 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.132 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 4420 00:11:39.391 00:11:39.391 Discovery Log Number of Records 6, Generation counter 6 00:11:39.391 =====Discovery Log Entry 0====== 00:11:39.391 trtype: tcp 00:11:39.391 adrfam: ipv4 00:11:39.391 subtype: current discovery subsystem 00:11:39.391 treq: not required 00:11:39.391 portid: 0 00:11:39.391 trsvcid: 4420 00:11:39.391 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:39.391 traddr: 10.0.0.2 00:11:39.391 eflags: explicit discovery connections, duplicate discovery information 00:11:39.391 sectype: none 00:11:39.391 =====Discovery Log Entry 1====== 00:11:39.391 trtype: tcp 00:11:39.391 adrfam: ipv4 00:11:39.391 subtype: nvme subsystem 00:11:39.391 treq: not required 00:11:39.391 portid: 0 00:11:39.391 trsvcid: 4420 00:11:39.391 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:39.391 traddr: 10.0.0.2 00:11:39.391 eflags: none 00:11:39.391 sectype: none 00:11:39.391 =====Discovery Log Entry 2====== 00:11:39.391 trtype: tcp 00:11:39.391 adrfam: ipv4 00:11:39.391 subtype: nvme subsystem 00:11:39.391 treq: not required 00:11:39.391 portid: 0 00:11:39.391 trsvcid: 4420 00:11:39.391 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:39.391 traddr: 10.0.0.2 00:11:39.391 eflags: none 00:11:39.391 sectype: none 00:11:39.391 =====Discovery Log Entry 3====== 00:11:39.391 trtype: tcp 00:11:39.391 adrfam: ipv4 00:11:39.391 subtype: nvme subsystem 00:11:39.391 treq: not required 00:11:39.391 portid: 0 00:11:39.391 trsvcid: 4420 00:11:39.391 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:39.391 traddr: 10.0.0.2 00:11:39.391 eflags: none 00:11:39.391 sectype: none 00:11:39.391 =====Discovery Log Entry 4====== 00:11:39.391 trtype: tcp 00:11:39.391 adrfam: ipv4 00:11:39.391 subtype: nvme subsystem 
00:11:39.391 treq: not required 00:11:39.391 portid: 0 00:11:39.391 trsvcid: 4420 00:11:39.391 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:39.391 traddr: 10.0.0.2 00:11:39.391 eflags: none 00:11:39.391 sectype: none 00:11:39.391 =====Discovery Log Entry 5====== 00:11:39.391 trtype: tcp 00:11:39.391 adrfam: ipv4 00:11:39.391 subtype: discovery subsystem referral 00:11:39.391 treq: not required 00:11:39.391 portid: 0 00:11:39.391 trsvcid: 4430 00:11:39.391 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:39.391 traddr: 10.0.0.2 00:11:39.391 eflags: none 00:11:39.391 sectype: none 00:11:39.391 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:39.391 Perform nvmf subsystem discovery via RPC 00:11:39.391 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:39.391 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.391 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.391 [ 00:11:39.391 { 00:11:39.391 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:39.391 "subtype": "Discovery", 00:11:39.391 "listen_addresses": [ 00:11:39.391 { 00:11:39.391 "trtype": "TCP", 00:11:39.391 "adrfam": "IPv4", 00:11:39.391 "traddr": "10.0.0.2", 00:11:39.391 "trsvcid": "4420" 00:11:39.391 } 00:11:39.391 ], 00:11:39.391 "allow_any_host": true, 00:11:39.391 "hosts": [] 00:11:39.391 }, 00:11:39.391 { 00:11:39.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.391 "subtype": "NVMe", 00:11:39.391 "listen_addresses": [ 00:11:39.391 { 00:11:39.391 "trtype": "TCP", 00:11:39.391 "adrfam": "IPv4", 00:11:39.391 "traddr": "10.0.0.2", 00:11:39.391 "trsvcid": "4420" 00:11:39.391 } 00:11:39.391 ], 00:11:39.391 "allow_any_host": true, 00:11:39.391 "hosts": [], 00:11:39.391 "serial_number": "SPDK00000000000001", 00:11:39.391 "model_number": "SPDK bdev Controller", 00:11:39.391 "max_namespaces": 32, 00:11:39.391 "min_cntlid": 1, 00:11:39.391 "max_cntlid": 65519, 00:11:39.391 "namespaces": [ 00:11:39.391 { 00:11:39.391 "nsid": 1, 00:11:39.391 "bdev_name": "Null1", 00:11:39.391 "name": "Null1", 00:11:39.391 "nguid": "045C940538BB461BA6F775E3E33D1388", 00:11:39.391 "uuid": "045c9405-38bb-461b-a6f7-75e3e33d1388" 00:11:39.391 } 00:11:39.391 ] 00:11:39.391 }, 00:11:39.391 { 00:11:39.391 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:39.391 "subtype": "NVMe", 00:11:39.391 "listen_addresses": [ 00:11:39.391 { 00:11:39.391 "trtype": "TCP", 00:11:39.391 "adrfam": "IPv4", 00:11:39.391 "traddr": "10.0.0.2", 00:11:39.391 "trsvcid": "4420" 00:11:39.391 } 00:11:39.391 ], 00:11:39.391 "allow_any_host": true, 00:11:39.391 "hosts": [], 00:11:39.391 "serial_number": "SPDK00000000000002", 00:11:39.391 "model_number": "SPDK bdev Controller", 00:11:39.391 "max_namespaces": 32, 00:11:39.391 "min_cntlid": 1, 00:11:39.391 "max_cntlid": 65519, 00:11:39.392 "namespaces": [ 00:11:39.392 { 00:11:39.392 "nsid": 1, 00:11:39.392 "bdev_name": "Null2", 00:11:39.392 "name": "Null2", 00:11:39.392 "nguid": "7D278FF73E0145C68465DCBCE04333AC", 00:11:39.392 "uuid": "7d278ff7-3e01-45c6-8465-dcbce04333ac" 00:11:39.392 } 00:11:39.392 ] 00:11:39.392 }, 00:11:39.392 { 00:11:39.392 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:39.392 "subtype": "NVMe", 00:11:39.392 "listen_addresses": [ 00:11:39.392 { 00:11:39.392 "trtype": "TCP", 00:11:39.392 "adrfam": "IPv4", 00:11:39.392 "traddr": "10.0.0.2", 
00:11:39.392 "trsvcid": "4420" 00:11:39.392 } 00:11:39.392 ], 00:11:39.392 "allow_any_host": true, 00:11:39.392 "hosts": [], 00:11:39.392 "serial_number": "SPDK00000000000003", 00:11:39.392 "model_number": "SPDK bdev Controller", 00:11:39.392 "max_namespaces": 32, 00:11:39.392 "min_cntlid": 1, 00:11:39.392 "max_cntlid": 65519, 00:11:39.392 "namespaces": [ 00:11:39.392 { 00:11:39.392 "nsid": 1, 00:11:39.392 "bdev_name": "Null3", 00:11:39.392 "name": "Null3", 00:11:39.392 "nguid": "B38F213D024E413096904EC162B2E98C", 00:11:39.392 "uuid": "b38f213d-024e-4130-9690-4ec162b2e98c" 00:11:39.392 } 00:11:39.392 ] 00:11:39.392 }, 00:11:39.392 { 00:11:39.392 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:39.392 "subtype": "NVMe", 00:11:39.392 "listen_addresses": [ 00:11:39.392 { 00:11:39.392 "trtype": "TCP", 00:11:39.392 "adrfam": "IPv4", 00:11:39.392 "traddr": "10.0.0.2", 00:11:39.392 "trsvcid": "4420" 00:11:39.392 } 00:11:39.392 ], 00:11:39.392 "allow_any_host": true, 00:11:39.392 "hosts": [], 00:11:39.392 "serial_number": "SPDK00000000000004", 00:11:39.392 "model_number": "SPDK bdev Controller", 00:11:39.392 "max_namespaces": 32, 00:11:39.392 "min_cntlid": 1, 00:11:39.392 "max_cntlid": 65519, 00:11:39.392 "namespaces": [ 00:11:39.392 { 00:11:39.392 "nsid": 1, 00:11:39.392 "bdev_name": "Null4", 00:11:39.392 "name": "Null4", 00:11:39.392 "nguid": "C094E31D52A84A239B1316E7B1DDF3CA", 00:11:39.392 "uuid": "c094e31d-52a8-4a23-9b13-16e7b1ddf3ca" 00:11:39.392 } 00:11:39.392 ] 00:11:39.392 } 00:11:39.392 ] 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.392 10:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:39.392 10:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.392 rmmod nvme_tcp 00:11:39.392 rmmod nvme_fabrics 00:11:39.392 rmmod nvme_keyring 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 321095 ']' 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 321095 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 321095 ']' 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 321095 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:39.392 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:39.652 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 321095 00:11:39.652 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:39.652 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:39.652 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 321095' 00:11:39.652 killing process with pid 321095 00:11:39.652 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 321095 00:11:39.652 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 321095 00:11:39.652 10:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:39.652 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:39.652 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:39.652 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:39.652 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:39.652 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:39.652 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:39.652 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:39.652 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:39.652 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.652 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.652 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.199 00:11:42.199 real 0m5.651s 00:11:42.199 user 0m4.730s 00:11:42.199 sys 0m1.968s 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.199 ************************************ 00:11:42.199 END TEST nvmf_target_discovery 00:11:42.199 ************************************ 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.199 ************************************ 00:11:42.199 START TEST nvmf_referrals 00:11:42.199 ************************************ 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:42.199 * Looking for test storage... 
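The discovery flow exercised above condenses to the following sketch (arguments copied from the trace; rpc_cmd is the harness wrapper seen there, which forwards to the target's JSON-RPC socket, /var/tmp/spdk.sock in this run):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i 102400 512
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    nvme discover -t tcp -a 10.0.0.2 -s 4420    # 6 records: discovery, 4 subsystems, 1 referral
                                                # (the harness also passes --hostnqn/--hostid)
    rpc_cmd nvmf_get_subsystems                 # same view as JSON
    # teardown, mirroring the delete loop in the trace
    for i in $(seq 1 4); do
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        rpc_cmd bdev_null_delete Null$i
    done
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430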
00:11:42.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:42.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.199 --rc genhtml_branch_coverage=1 00:11:42.199 --rc genhtml_function_coverage=1 00:11:42.199 --rc genhtml_legend=1 00:11:42.199 --rc geninfo_all_blocks=1 00:11:42.199 --rc geninfo_unexecuted_blocks=1 00:11:42.199 00:11:42.199 ' 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:42.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.199 --rc genhtml_branch_coverage=1 00:11:42.199 --rc genhtml_function_coverage=1 00:11:42.199 --rc genhtml_legend=1 00:11:42.199 --rc geninfo_all_blocks=1 00:11:42.199 --rc geninfo_unexecuted_blocks=1 00:11:42.199 00:11:42.199 ' 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:42.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.199 --rc genhtml_branch_coverage=1 00:11:42.199 --rc genhtml_function_coverage=1 00:11:42.199 --rc genhtml_legend=1 00:11:42.199 --rc geninfo_all_blocks=1 00:11:42.199 --rc geninfo_unexecuted_blocks=1 00:11:42.199 00:11:42.199 ' 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:42.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.199 --rc genhtml_branch_coverage=1 00:11:42.199 --rc genhtml_function_coverage=1 00:11:42.199 --rc genhtml_legend=1 00:11:42.199 --rc geninfo_all_blocks=1 00:11:42.199 --rc geninfo_unexecuted_blocks=1 00:11:42.199 00:11:42.199 ' 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:42.199 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
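referrals.sh sources the same nvmf/common.sh shown above, which generates the initiator's host identity once with nvme gen-hostnqn and reuses it for every nvme discover/connect in these tests. A small sketch of that identity handling; deriving the host ID from the NQN this way is one option, not necessarily the exact expression common.sh uses:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep just the UUID part after the last colon
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420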
00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.200 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:44.107 10:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:11:44.107 Found 0000:82:00.0 (0x8086 - 0x159b) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:11:44.107 Found 0000:82:00.1 (0x8086 - 0x159b) 00:11:44.107 
10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:11:44.107 Found net devices under 0000:82:00.0: cvl_0_0 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:11:44.107 Found net devices under 0000:82:00.1: cvl_0_1 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:44.107 10:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.107 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:44.108 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:44.108 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.108 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.108 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:44.108 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:44.108 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.108 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:44.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:44.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:11:44.367 00:11:44.367 --- 10.0.0.2 ping statistics --- 00:11:44.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.367 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:11:44.367 00:11:44.367 --- 10.0.0.1 ping statistics --- 00:11:44.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.367 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=323194 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 323194 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 323194 ']' 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
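For reference, the nvmf_tcp_init sequence traced above amounts to the condensed sketch below. The interface names (cvl_0_0 for the target-side E810 port, cvl_0_1 for the initiator side) and the namespace name are the ones this particular run detected and will differ on other hosts; the iptables rule is shown without the SPDK_NVMF comment tag that the ipts helper appends.

  # Isolate the target port in its own network namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in on the initiator interface, then verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1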
00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:44.367 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.367 [2024-11-15 10:30:32.722252] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:11:44.367 [2024-11-15 10:30:32.722344] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.367 [2024-11-15 10:30:32.797605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.626 [2024-11-15 10:30:32.859382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.626 [2024-11-15 10:30:32.859441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.626 [2024-11-15 10:30:32.859456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.626 [2024-11-15 10:30:32.859467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.626 [2024-11-15 10:30:32.859477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.626 [2024-11-15 10:30:32.861170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.626 [2024-11-15 10:30:32.861246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.626 [2024-11-15 10:30:32.861250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.626 [2024-11-15 10:30:32.861193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.626 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:44.626 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:44.626 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.626 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:44.626 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.626 [2024-11-15 10:30:33.017224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
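The target is now running inside the namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, four reactors) and the referrals test below drives it entirely over JSON-RPC. Assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py on the default /var/tmp/spdk.sock socket, the core of the exercise reduces to roughly the following; NVME_HOSTNQN and NVME_HOSTID are the variables set in nvmf/common.sh for this run.

  RPC="./scripts/rpc.py"                       # default socket: /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  # Add three referrals, confirm the target reports all of them, then remove them again.
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  $RPC nvmf_discovery_get_referrals | jq length     # expected: 3
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  # Each RPC-side check is cross-verified from the initiator by reading the discovery log page:
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json

The later steps in the trace repeat the same add/verify/remove pattern with a subsystem NQN attached to the referral (-n discovery and -n nqn.2016-06.io.spdk:cnode1).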
00:11:44.626 [2024-11-15 10:30:33.029500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.626 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:44.627 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.627 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.885 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.142 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.142 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:45.142 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.142 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.142 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.142 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:45.143 10:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:45.143 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:45.401 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.659 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:45.659 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:45.659 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:45.659 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:45.659 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:45.659 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.659 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:45.659 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:45.659 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:45.659 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:45.659 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:45.659 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.659 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:45.917 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.918 10:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:45.918 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:46.200 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:46.200 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:46.200 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:46.200 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:46.200 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:46.200 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:46.200 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@76 -- # jq -r .subnqn 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:46.469 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp 
']' 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:46.791 rmmod nvme_tcp 00:11:46.791 rmmod nvme_fabrics 00:11:46.791 rmmod nvme_keyring 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 323194 ']' 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 323194 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 323194 ']' 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 323194 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 323194 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 323194' 00:11:46.791 killing process with pid 323194 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 323194 00:11:46.791 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 323194 00:11:47.072 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.072 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.072 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.072 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:47.072 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:47.072 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.072 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.072 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.072 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.072 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.072 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.072 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.172 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:49.172 00:11:49.172 real 0m7.281s 00:11:49.172 user 0m11.688s 00:11:49.172 sys 0m2.392s 00:11:49.172 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:49.172 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:49.172 ************************************ 00:11:49.172 END TEST nvmf_referrals 00:11:49.172 ************************************ 00:11:49.172 10:30:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:49.172 10:30:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:49.172 10:30:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:49.172 10:30:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.172 ************************************ 00:11:49.172 START TEST nvmf_connect_disconnect 00:11:49.172 ************************************ 00:11:49.172 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:49.172 * Looking for test storage... 00:11:49.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.172 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:49.172 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:49.172 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:49.431 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:49.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.432 --rc genhtml_branch_coverage=1 00:11:49.432 --rc genhtml_function_coverage=1 00:11:49.432 --rc genhtml_legend=1 00:11:49.432 --rc geninfo_all_blocks=1 00:11:49.432 --rc geninfo_unexecuted_blocks=1 00:11:49.432 00:11:49.432 ' 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:49.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.432 --rc genhtml_branch_coverage=1 00:11:49.432 --rc genhtml_function_coverage=1 00:11:49.432 --rc genhtml_legend=1 00:11:49.432 --rc geninfo_all_blocks=1 00:11:49.432 --rc geninfo_unexecuted_blocks=1 00:11:49.432 00:11:49.432 ' 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:49.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.432 --rc genhtml_branch_coverage=1 00:11:49.432 --rc genhtml_function_coverage=1 00:11:49.432 --rc genhtml_legend=1 00:11:49.432 --rc geninfo_all_blocks=1 00:11:49.432 --rc geninfo_unexecuted_blocks=1 00:11:49.432 00:11:49.432 ' 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:49.432 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.432 --rc genhtml_branch_coverage=1 00:11:49.432 --rc genhtml_function_coverage=1 00:11:49.432 --rc genhtml_legend=1 00:11:49.432 --rc geninfo_all_blocks=1 00:11:49.432 --rc geninfo_unexecuted_blocks=1 00:11:49.432 00:11:49.432 ' 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.432 10:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:49.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:49.432 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.965 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.965 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.965 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.965 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.965 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.965 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.965 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.965 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.965 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.965 
10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:51.965 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.965 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:51.965 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.965 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:11:51.966 Found 0000:82:00.0 (0x8086 - 0x159b) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.966 
10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:11:51.966 Found 0000:82:00.1 (0x8086 - 0x159b) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:11:51.966 Found net devices under 0000:82:00.0: cvl_0_0 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
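The "Found net devices under ..." lines above come from a plain sysfs glob per PCI function; a condensed sketch of that lookup, using the two PCI addresses reported in this run (the real common.sh iterates the pci_devs array it built above), is:

    for pci in 0000:82:00.0 0000:82:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue                        # glob left unexpanded: no netdev bound to this function
            echo "Found net devices under $pci: ${netdir##*/}"  # e.g. cvl_0_0 / cvl_0_1 in this run
        done
    done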
00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:11:51.966 Found net devices under 0000:82:00.1: cvl_0_1 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:51.966 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.966 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.966 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.966 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:51.966 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:51.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:11:51.966 00:11:51.966 --- 10.0.0.2 ping statistics --- 00:11:51.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.966 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:11:51.966 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:51.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:11:51.966 00:11:51.966 --- 10.0.0.1 ping statistics --- 00:11:51.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.967 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=325632 00:11:51.967 10:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 325632 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 325632 ']' 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.967 [2024-11-15 10:30:40.129175] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:11:51.967 [2024-11-15 10:30:40.129253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.967 [2024-11-15 10:30:40.203201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.967 [2024-11-15 10:30:40.265117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.967 [2024-11-15 10:30:40.265179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.967 [2024-11-15 10:30:40.265192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.967 [2024-11-15 10:30:40.265203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.967 [2024-11-15 10:30:40.265213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
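To keep the picture clear before the target application's own notices start: the preceding nvmf_tcp_init entries moved one E810 port into a private network namespace and left the other in the host, so initiator and target traffic crosses a real link. A condensed sketch of that setup, with interface names and addresses exactly as logged (paths shortened, iptables comment tag omitted; not a general recipe), is:

    ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP port 4420 through
    ping -c 1 10.0.0.2                                            # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # namespace -> host
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    waitforlisten $!                                              # harness helper seen above: waits for the RPC socket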
00:11:51.967 [2024-11-15 10:30:40.266845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.967 [2024-11-15 10:30:40.266910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.967 [2024-11-15 10:30:40.267004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.967 [2024-11-15 10:30:40.267007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.967 [2024-11-15 10:30:40.420341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.967 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.226 10:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.226 [2024-11-15 10:30:40.489839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:52.226 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:54.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:06.363 rmmod nvme_tcp 00:12:06.363 rmmod nvme_fabrics 00:12:06.363 rmmod nvme_keyring 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 325632 ']' 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 325632 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 325632 ']' 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 325632 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
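The five "disconnected 1 controller(s)" iterations above ran against a subsystem assembled through the RPCs echoed a few entries earlier. Condensed, reusing the harness's rpc_cmd helper as it appears in the log, with sizes, NQN, serial and address exactly as logged:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0           # TCP transport, options as logged
    bdev=$(rpc_cmd bdev_malloc_create 64 512)                      # 64 MB malloc bdev, 512-byte blocks (Malloc0 in this run)
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # allow-any-host, serial as shown
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"                      # attach the malloc bdev as a namespace
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on the namespace-side address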
00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 325632 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 325632' 00:12:06.363 killing process with pid 325632 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 325632 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 325632 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.363 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.270 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:08.270 00:12:08.270 real 0m19.200s 00:12:08.270 user 0m57.191s 00:12:08.270 sys 0m3.674s 00:12:08.270 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:08.270 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.270 ************************************ 00:12:08.270 END TEST nvmf_connect_disconnect 00:12:08.270 ************************************ 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:12:08.529 ************************************ 00:12:08.529 START TEST nvmf_multitarget 00:12:08.529 ************************************ 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:08.529 * Looking for test storage... 00:12:08.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.529 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:08.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.530 --rc genhtml_branch_coverage=1 00:12:08.530 --rc genhtml_function_coverage=1 00:12:08.530 --rc genhtml_legend=1 00:12:08.530 --rc geninfo_all_blocks=1 00:12:08.530 --rc geninfo_unexecuted_blocks=1 00:12:08.530 00:12:08.530 ' 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:08.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.530 --rc genhtml_branch_coverage=1 00:12:08.530 --rc genhtml_function_coverage=1 00:12:08.530 --rc genhtml_legend=1 00:12:08.530 --rc geninfo_all_blocks=1 00:12:08.530 --rc geninfo_unexecuted_blocks=1 00:12:08.530 00:12:08.530 ' 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:08.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.530 --rc genhtml_branch_coverage=1 00:12:08.530 --rc genhtml_function_coverage=1 00:12:08.530 --rc genhtml_legend=1 00:12:08.530 --rc geninfo_all_blocks=1 00:12:08.530 --rc geninfo_unexecuted_blocks=1 00:12:08.530 00:12:08.530 ' 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:08.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.530 --rc genhtml_branch_coverage=1 00:12:08.530 --rc genhtml_function_coverage=1 00:12:08.530 --rc genhtml_legend=1 00:12:08.530 --rc geninfo_all_blocks=1 00:12:08.530 --rc geninfo_unexecuted_blocks=1 00:12:08.530 00:12:08.530 ' 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.530 10:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:08.530 10:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.530 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.531 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.531 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:08.531 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:08.531 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.531 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:12:11.062 Found 0000:82:00.0 (0x8086 - 0x159b) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:12:11.062 Found 0000:82:00.1 (0x8086 - 0x159b) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:12:11.062 Found net devices under 0000:82:00.0: cvl_0_0 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:12:11.062 Found net devices under 0000:82:00.1: cvl_0_1 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.062 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:12:11.063 00:12:11.063 --- 10.0.0.2 ping statistics --- 00:12:11.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.063 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:11.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:12:11.063 00:12:11.063 --- 10.0.0.1 ping statistics --- 00:12:11.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.063 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=329284 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 329284 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 329284 ']' 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:11.063 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.063 [2024-11-15 10:30:59.334201] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
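Before the EAL parameter dump that follows, a quick annotation of the three flags on the nvmf_tgt invocation above, as the surrounding notices confirm them (path shortened):

    #   -i 0       instance / shared-memory id (echoed back in the "spdk_trace -s nvmf -i 0" hint below)
    #   -e 0xFFFF  tracepoint group mask (hence "Tracepoint Group Mask 0xFFFF specified" below)
    #   -m 0xF     core mask for cores 0-3 (hence the four "Reactor started on core N" notices below)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF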
00:12:11.063 [2024-11-15 10:30:59.334290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.063 [2024-11-15 10:30:59.408534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.063 [2024-11-15 10:30:59.470055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.063 [2024-11-15 10:30:59.470118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.063 [2024-11-15 10:30:59.470132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.063 [2024-11-15 10:30:59.470143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.063 [2024-11-15 10:30:59.470158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.063 [2024-11-15 10:30:59.471829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.063 [2024-11-15 10:30:59.471953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.063 [2024-11-15 10:30:59.472012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.063 [2024-11-15 10:30:59.472015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.321 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:11.321 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:11.321 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:11.321 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:11.321 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.321 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.321 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:11.321 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:11.321 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:11.321 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:11.321 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:11.580 "nvmf_tgt_1" 00:12:11.580 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:11.580 "nvmf_tgt_2" 00:12:11.580 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
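The multitarget checks that start here and continue below boil down to counting targets before and after create/delete calls; the test itself compares the counts with '!=' and bails on mismatch, so this is the equivalent positive check, with the script path shortened and the counts as they appear in this run:

    rpc=./test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the initial target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # initial target plus the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1              # prints "true"
    $rpc nvmf_delete_target -n nvmf_tgt_2              # prints "true"
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the initial target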
00:12:11.580 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:11.838 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:11.838 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:11.838 true 00:12:11.838 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:11.838 true 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:12.096 rmmod nvme_tcp 00:12:12.096 rmmod nvme_fabrics 00:12:12.096 rmmod nvme_keyring 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 329284 ']' 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 329284 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 329284 ']' 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 329284 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 329284 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:12.096 10:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 329284' 00:12:12.096 killing process with pid 329284 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 329284 00:12:12.096 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 329284 00:12:12.354 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:12.354 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:12.354 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:12.354 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:12.354 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:12.354 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:12.354 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:12.354 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:12.354 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:12.354 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.354 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.354 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.889 00:12:14.889 real 0m6.010s 00:12:14.889 user 0m6.790s 00:12:14.889 sys 0m2.041s 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:14.889 ************************************ 00:12:14.889 END TEST nvmf_multitarget 00:12:14.889 ************************************ 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.889 ************************************ 00:12:14.889 START TEST nvmf_rpc 00:12:14.889 ************************************ 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:14.889 * Looking for test storage... 
00:12:14.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:14.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.889 --rc genhtml_branch_coverage=1 00:12:14.889 --rc genhtml_function_coverage=1 00:12:14.889 --rc genhtml_legend=1 00:12:14.889 --rc geninfo_all_blocks=1 00:12:14.889 --rc geninfo_unexecuted_blocks=1 00:12:14.889 00:12:14.889 ' 00:12:14.889 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:14.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.889 --rc genhtml_branch_coverage=1 00:12:14.889 --rc genhtml_function_coverage=1 00:12:14.889 --rc genhtml_legend=1 00:12:14.889 --rc geninfo_all_blocks=1 00:12:14.889 --rc geninfo_unexecuted_blocks=1 00:12:14.889 00:12:14.889 ' 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:14.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.890 --rc genhtml_branch_coverage=1 00:12:14.890 --rc genhtml_function_coverage=1 00:12:14.890 --rc genhtml_legend=1 00:12:14.890 --rc geninfo_all_blocks=1 00:12:14.890 --rc geninfo_unexecuted_blocks=1 00:12:14.890 00:12:14.890 ' 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:14.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.890 --rc genhtml_branch_coverage=1 00:12:14.890 --rc genhtml_function_coverage=1 00:12:14.890 --rc genhtml_legend=1 00:12:14.890 --rc geninfo_all_blocks=1 00:12:14.890 --rc geninfo_unexecuted_blocks=1 00:12:14.890 00:12:14.890 ' 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:14.890 10:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:14.890 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:16.793 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:12:16.794 Found 0000:82:00.0 (0x8086 - 0x159b) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:12:16.794 Found 0000:82:00.1 (0x8086 - 0x159b) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:12:16.794 Found net devices under 0000:82:00.0: cvl_0_0 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:12:16.794 Found net devices under 0000:82:00.1: cvl_0_1 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:16.794 10:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:16.794 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:17.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:12:17.053 00:12:17.053 --- 10.0.0.2 ping statistics --- 00:12:17.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.053 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:17.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:12:17.053 00:12:17.053 --- 10.0.0.1 ping statistics --- 00:12:17.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.053 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=331519 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 331519 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 331519 ']' 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:17.053 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.053 [2024-11-15 10:31:05.393077] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:12:17.053 [2024-11-15 10:31:05.393178] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.053 [2024-11-15 10:31:05.469085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.311 [2024-11-15 10:31:05.528231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.311 [2024-11-15 10:31:05.528279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.311 [2024-11-15 10:31:05.528307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.311 [2024-11-15 10:31:05.528318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.311 [2024-11-15 10:31:05.528328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.311 [2024-11-15 10:31:05.529800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.311 [2024-11-15 10:31:05.529859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.311 [2024-11-15 10:31:05.529881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.311 [2024-11-15 10:31:05.529885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.311 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:17.311 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:17.311 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:17.311 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:17.311 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.311 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:17.311 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.311 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.311 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:17.311 "tick_rate": 2700000000, 00:12:17.311 "poll_groups": [ 00:12:17.311 { 00:12:17.311 "name": "nvmf_tgt_poll_group_000", 00:12:17.311 "admin_qpairs": 0, 00:12:17.311 "io_qpairs": 0, 00:12:17.311 "current_admin_qpairs": 0, 00:12:17.311 "current_io_qpairs": 0, 00:12:17.311 "pending_bdev_io": 0, 00:12:17.311 "completed_nvme_io": 0, 00:12:17.311 "transports": [] 00:12:17.311 }, 00:12:17.312 { 00:12:17.312 "name": "nvmf_tgt_poll_group_001", 00:12:17.312 "admin_qpairs": 0, 00:12:17.312 "io_qpairs": 0, 00:12:17.312 "current_admin_qpairs": 0, 00:12:17.312 "current_io_qpairs": 0, 00:12:17.312 "pending_bdev_io": 0, 00:12:17.312 "completed_nvme_io": 0, 00:12:17.312 "transports": [] 00:12:17.312 }, 00:12:17.312 { 00:12:17.312 "name": "nvmf_tgt_poll_group_002", 00:12:17.312 "admin_qpairs": 0, 00:12:17.312 "io_qpairs": 0, 00:12:17.312 
"current_admin_qpairs": 0, 00:12:17.312 "current_io_qpairs": 0, 00:12:17.312 "pending_bdev_io": 0, 00:12:17.312 "completed_nvme_io": 0, 00:12:17.312 "transports": [] 00:12:17.312 }, 00:12:17.312 { 00:12:17.312 "name": "nvmf_tgt_poll_group_003", 00:12:17.312 "admin_qpairs": 0, 00:12:17.312 "io_qpairs": 0, 00:12:17.312 "current_admin_qpairs": 0, 00:12:17.312 "current_io_qpairs": 0, 00:12:17.312 "pending_bdev_io": 0, 00:12:17.312 "completed_nvme_io": 0, 00:12:17.312 "transports": [] 00:12:17.312 } 00:12:17.312 ] 00:12:17.312 }' 00:12:17.312 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:17.312 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:17.312 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:17.312 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:17.312 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:17.312 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.571 [2024-11-15 10:31:05.784837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:17.571 "tick_rate": 2700000000, 00:12:17.571 "poll_groups": [ 00:12:17.571 { 00:12:17.571 "name": "nvmf_tgt_poll_group_000", 00:12:17.571 "admin_qpairs": 0, 00:12:17.571 "io_qpairs": 0, 00:12:17.571 "current_admin_qpairs": 0, 00:12:17.571 "current_io_qpairs": 0, 00:12:17.571 "pending_bdev_io": 0, 00:12:17.571 "completed_nvme_io": 0, 00:12:17.571 "transports": [ 00:12:17.571 { 00:12:17.571 "trtype": "TCP" 00:12:17.571 } 00:12:17.571 ] 00:12:17.571 }, 00:12:17.571 { 00:12:17.571 "name": "nvmf_tgt_poll_group_001", 00:12:17.571 "admin_qpairs": 0, 00:12:17.571 "io_qpairs": 0, 00:12:17.571 "current_admin_qpairs": 0, 00:12:17.571 "current_io_qpairs": 0, 00:12:17.571 "pending_bdev_io": 0, 00:12:17.571 "completed_nvme_io": 0, 00:12:17.571 "transports": [ 00:12:17.571 { 00:12:17.571 "trtype": "TCP" 00:12:17.571 } 00:12:17.571 ] 00:12:17.571 }, 00:12:17.571 { 00:12:17.571 "name": "nvmf_tgt_poll_group_002", 00:12:17.571 "admin_qpairs": 0, 00:12:17.571 "io_qpairs": 0, 00:12:17.571 "current_admin_qpairs": 0, 00:12:17.571 "current_io_qpairs": 0, 00:12:17.571 "pending_bdev_io": 0, 00:12:17.571 "completed_nvme_io": 0, 00:12:17.571 "transports": [ 00:12:17.571 { 00:12:17.571 "trtype": "TCP" 
00:12:17.571 } 00:12:17.571 ] 00:12:17.571 }, 00:12:17.571 { 00:12:17.571 "name": "nvmf_tgt_poll_group_003", 00:12:17.571 "admin_qpairs": 0, 00:12:17.571 "io_qpairs": 0, 00:12:17.571 "current_admin_qpairs": 0, 00:12:17.571 "current_io_qpairs": 0, 00:12:17.571 "pending_bdev_io": 0, 00:12:17.571 "completed_nvme_io": 0, 00:12:17.571 "transports": [ 00:12:17.571 { 00:12:17.571 "trtype": "TCP" 00:12:17.571 } 00:12:17.571 ] 00:12:17.571 } 00:12:17.571 ] 00:12:17.571 }' 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.571 Malloc1 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.571 [2024-11-15 10:31:05.953109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:12:17.571 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:17.572 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:12:17.572 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:17.572 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.572 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:17.572 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.572 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:17.572 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.572 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:17.572 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:17.572 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:12:17.572 [2024-11-15 10:31:05.975768] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd' 00:12:17.572 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:17.572 could not add new controller: failed to write to nvme-fabrics device 00:12:17.572 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:17.572 10:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:17.572 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:17.572 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:17.572 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:17.572 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.572 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.572 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.572 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.505 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.505 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:18.505 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.505 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:18.505 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:20.404 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:20.404 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:20.404 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.404 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:20.404 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.404 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:20.404 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:20.405 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.405 [2024-11-15 10:31:08.854466] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd' 00:12:20.663 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:20.663 could not add new controller: failed to write to nvme-fabrics device 00:12:20.663 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:20.663 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:20.663 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:20.663 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:20.663 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:20.663 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.663 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.663 
10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.663 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.228 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.228 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:21.228 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.228 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:21.228 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:23.129 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:23.129 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:23.129 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.129 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:23.129 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.129 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:23.129 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.388 
10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.388 [2024-11-15 10:31:11.680538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.388 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.955 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.955 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:23.955 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.955 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:23.955 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.479 [2024-11-15 10:31:14.549122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.479 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.044 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.044 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:27.044 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.044 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:27.044 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:28.944 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:28.944 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.945 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.203 [2024-11-15 10:31:17.417156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.203 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.769 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.769 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:29.769 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.769 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:29.769 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:32.297 
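The sleep/lsblk cycle here is the test's waitforserial helper: after each nvme connect it polls the block-device list until a namespace carrying the subsystem serial shows up. A condensed sketch of that polling loop (the function name is illustrative and the 15-iteration cap simply mirrors the xtrace; the real helper lives in autotest_common.sh):

  waitforserial_sketch() {
      local serial=$1 i=0
      # Poll until lsblk reports a block device whose SERIAL column matches.
      while (( i++ <= 15 )); do
          sleep 2
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
      done
      return 1
  }
  waitforserial_sketch SPDKISFASTANDAWESOME || echo "namespace never appeared"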
10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.297 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.298 [2024-11-15 10:31:20.291987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.298 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.556 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.556 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:32.556 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.556 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:32.556 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.083 [2024-11-15 10:31:23.161953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.083 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.084 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.084 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.084 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.084 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.084 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.084 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.084 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.084 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.344 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.344 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:35.344 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.344 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:35.344 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:37.873 
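Each of the five iterations above runs the same subsystem lifecycle end to end: create the subsystem with its serial, add the TCP listener, attach Malloc1 as namespace 5, open host access, connect and disconnect from the initiator side, then tear the namespace and subsystem back down. Restated as plain commands, assuming rpc_cmd maps to scripts/rpc.py and that the Malloc1 bdev already exists from the earlier setup:

  NQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd

  scripts/rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5     # expose Malloc1 as nsid 5
  scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"

  nvme connect --hostnqn="$HOSTNQN" --hostid=8b464f06-2980-e311-ba20-001e67a94acd \
    -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
  nvme disconnect -n "$NQN"

  scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 5
  scripts/rpc.py nvmf_delete_subsystem "$NQN"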
10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.873 [2024-11-15 10:31:25.928515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.873 [2024-11-15 10:31:25.976588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.873 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.874 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 
10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 [2024-11-15 10:31:26.024783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 [2024-11-15 10:31:26.072923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 [2024-11-15 10:31:26.121091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.874 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:37.874 "tick_rate": 2700000000, 00:12:37.874 "poll_groups": [ 00:12:37.874 { 00:12:37.874 "name": "nvmf_tgt_poll_group_000", 00:12:37.874 "admin_qpairs": 2, 00:12:37.874 "io_qpairs": 84, 00:12:37.874 "current_admin_qpairs": 0, 00:12:37.874 "current_io_qpairs": 0, 00:12:37.874 "pending_bdev_io": 0, 00:12:37.874 "completed_nvme_io": 167, 00:12:37.874 "transports": [ 00:12:37.874 { 00:12:37.874 "trtype": "TCP" 00:12:37.874 } 00:12:37.874 ] 00:12:37.874 }, 00:12:37.874 { 00:12:37.874 "name": "nvmf_tgt_poll_group_001", 00:12:37.874 "admin_qpairs": 2, 00:12:37.874 "io_qpairs": 84, 00:12:37.874 "current_admin_qpairs": 0, 00:12:37.874 "current_io_qpairs": 0, 00:12:37.874 "pending_bdev_io": 0, 00:12:37.874 "completed_nvme_io": 202, 00:12:37.874 "transports": [ 00:12:37.874 { 00:12:37.874 "trtype": "TCP" 00:12:37.874 } 00:12:37.874 ] 00:12:37.874 }, 00:12:37.874 { 00:12:37.874 "name": "nvmf_tgt_poll_group_002", 00:12:37.874 "admin_qpairs": 1, 00:12:37.874 "io_qpairs": 84, 00:12:37.874 "current_admin_qpairs": 0, 00:12:37.874 "current_io_qpairs": 0, 00:12:37.874 "pending_bdev_io": 0, 00:12:37.874 "completed_nvme_io": 135, 00:12:37.874 "transports": [ 00:12:37.874 { 00:12:37.874 "trtype": "TCP" 00:12:37.874 } 00:12:37.874 ] 00:12:37.874 }, 00:12:37.874 { 00:12:37.874 "name": "nvmf_tgt_poll_group_003", 00:12:37.874 "admin_qpairs": 2, 00:12:37.874 "io_qpairs": 84, 00:12:37.874 "current_admin_qpairs": 0, 00:12:37.874 "current_io_qpairs": 0, 00:12:37.874 "pending_bdev_io": 0, 00:12:37.875 "completed_nvme_io": 182, 00:12:37.875 "transports": [ 00:12:37.875 { 00:12:37.875 "trtype": "TCP" 00:12:37.875 } 00:12:37.875 ] 00:12:37.875 } 00:12:37.875 ] 00:12:37.875 }' 00:12:37.875 10:31:26 
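The second loop above repeats the subsystem setup without ever connecting a host: it adds Malloc1 with no explicit nsid (the target assigns one, and the matching remove targets nsid 1) and immediately deletes everything again. nvmf_get_stats then dumps per-poll-group queue-pair counters, which the jsum helper in the following lines reduces with jq and awk. A standalone sketch of that aggregation, again treating the scripts/rpc.py path as an assumption:

  stats=$(scripts/rpc.py nvmf_get_stats)
  # Sum a counter across all poll groups, exactly what jsum does with jq | awk.
  admin_qpairs=$(jq '.poll_groups[].admin_qpairs' <<<"$stats" | awk '{s += $1} END {print s}')
  # Equivalent jq-only form for the I/O queue pairs.
  io_qpairs=$(jq '[.poll_groups[].io_qpairs] | add' <<<"$stats")
  (( admin_qpairs > 0 && io_qpairs > 0 )) || echo "no queue pairs were exercised"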
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.875 rmmod nvme_tcp 00:12:37.875 rmmod nvme_fabrics 00:12:37.875 rmmod nvme_keyring 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 331519 ']' 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 331519 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 331519 ']' 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 331519 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:37.875 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 331519 00:12:38.133 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:38.133 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:38.133 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 331519' 
00:12:38.133 killing process with pid 331519 00:12:38.133 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 331519 00:12:38.133 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 331519 00:12:38.394 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:38.394 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:38.394 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:38.394 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:38.394 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:38.394 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:38.394 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:38.394 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:38.394 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:38.394 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.394 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.394 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.297 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:40.297 00:12:40.297 real 0m25.833s 00:12:40.297 user 1m23.627s 00:12:40.297 sys 0m4.417s 00:12:40.297 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:40.297 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.297 ************************************ 00:12:40.297 END TEST nvmf_rpc 00:12:40.297 ************************************ 00:12:40.297 10:31:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:40.297 10:31:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:40.297 10:31:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:40.297 10:31:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:40.297 ************************************ 00:12:40.297 START TEST nvmf_invalid 00:12:40.297 ************************************ 00:12:40.297 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:40.297 * Looking for test storage... 
00:12:40.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.557 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:40.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.558 --rc genhtml_branch_coverage=1 00:12:40.558 --rc genhtml_function_coverage=1 00:12:40.558 --rc genhtml_legend=1 00:12:40.558 --rc geninfo_all_blocks=1 00:12:40.558 --rc geninfo_unexecuted_blocks=1 00:12:40.558 00:12:40.558 ' 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:40.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.558 --rc genhtml_branch_coverage=1 00:12:40.558 --rc genhtml_function_coverage=1 00:12:40.558 --rc genhtml_legend=1 00:12:40.558 --rc geninfo_all_blocks=1 00:12:40.558 --rc geninfo_unexecuted_blocks=1 00:12:40.558 00:12:40.558 ' 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:40.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.558 --rc genhtml_branch_coverage=1 00:12:40.558 --rc genhtml_function_coverage=1 00:12:40.558 --rc genhtml_legend=1 00:12:40.558 --rc geninfo_all_blocks=1 00:12:40.558 --rc geninfo_unexecuted_blocks=1 00:12:40.558 00:12:40.558 ' 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:40.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.558 --rc genhtml_branch_coverage=1 00:12:40.558 --rc genhtml_function_coverage=1 00:12:40.558 --rc genhtml_legend=1 00:12:40.558 --rc geninfo_all_blocks=1 00:12:40.558 --rc geninfo_unexecuted_blocks=1 00:12:40.558 00:12:40.558 ' 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:40.558 10:31:28 
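The lt/cmp_versions xtrace above is scripts/common.sh comparing the detected lcov version against 2, field by field, before picking the coverage options exported right after it. A condensed sketch of that comparison (the function name here is illustrative; the real helper also splits on ':' and handles more cases):

  version_lt_sketch() {   # returns 0 if $1 < $2, comparing dot/dash-separated fields numerically
      local IFS='.-' i
      local -a v1 v2
      read -ra v1 <<<"$1"; read -ra v2 <<<"$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # versions are equal
  }
  version_lt_sketch 1.15 2 && echo "lcov < 2: use the --rc lcov_* coverage options"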
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:40.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
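The "[: : integer expression expected" message captured above comes from nvmf/common.sh line 33 running `[ '' -eq 1 ]` on an empty variable; the script tolerates it and carries on. For reference, the usual way to keep such a numeric test quiet is to give the value a default before comparing (sketch only; `SOME_NUMERIC_FLAG` is a placeholder name, not the variable the script actually tests):

# Empty string trips '[ -eq ]':   [ '' -eq 1 ]  -> "integer expression expected"
# Defaulting to 0 keeps the test numeric:
SOME_NUMERIC_FLAG=${SOME_NUMERIC_FLAG:-0}
if [ "$SOME_NUMERIC_FLAG" -eq 1 ]; then
    echo "feature enabled"
fi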
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:40.558 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.559 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.559 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.559 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:40.559 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:40.559 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:40.559 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:12:43.091 Found 0000:82:00.0 (0x8086 - 0x159b) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:12:43.091 Found 0000:82:00.1 (0x8086 - 0x159b) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:12:43.091 Found net devices under 0000:82:00.0: cvl_0_0 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:12:43.091 Found net devices under 0000:82:00.1: cvl_0_1 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
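What the gather_supported_nvmf_pci_devs trace above boils down to: walk the PCI bus for the supported Intel E810 device IDs (0x1592/0x159b in this run), then list the kernel net devices that sysfs exposes under each matching PCI function (here cvl_0_0 and cvl_0_1). A condensed sketch of that sysfs walk, with the device-ID table shortened; this is not the full helper:

intel=0x8086
declare -a e810=(0x1592 0x159b) net_devs=()
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    [[ $vendor == "$intel" ]] || continue
    [[ " ${e810[*]} " == *" $device "* ]] || continue
    echo "Found ${dev##*/} ($vendor - $device)"
    for net in "$dev"/net/*; do
        [[ -e $net ]] && net_devs+=("${net##*/}")   # e.g. cvl_0_0, cvl_0_1
    done
done
echo "Net devices: ${net_devs[*]}"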
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:43.091 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:43.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:12:43.092 00:12:43.092 --- 10.0.0.2 ping statistics --- 00:12:43.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.092 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
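The nvmf_tcp_init sequence above is the whole point-to-point fixture for a physical NIC pair: one port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule opens TCP/4420 for the NVMe-oF listener. The same plumbing in a few lines, using the interface names and addresses from this run:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # sanity check: root namespace -> target namespace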
00:12:43.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:12:43.092 00:12:43.092 --- 10.0.0.1 ping statistics --- 00:12:43.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.092 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=336039 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 336039 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 336039 ']' 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:43.092 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:43.092 [2024-11-15 10:31:31.350412] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
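nvmfappstart then launches the target application inside that namespace and blocks in waitforlisten until the RPC socket answers; only after that do the invalid-parameter RPCs below make sense. A reduced sketch of the launch-and-wait step, assuming the paths from this workspace (the poll loop, interval, and retry count are illustrative, not waitforlisten's actual body):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py

ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Wait until the app is listening on /var/tmp/spdk.sock before issuing test RPCs.
for (( i = 0; i < 100; i++ )); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done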
00:12:43.092 [2024-11-15 10:31:31.350483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.092 [2024-11-15 10:31:31.422762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.092 [2024-11-15 10:31:31.479810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.092 [2024-11-15 10:31:31.479865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.092 [2024-11-15 10:31:31.479893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.092 [2024-11-15 10:31:31.479904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.092 [2024-11-15 10:31:31.479914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.092 [2024-11-15 10:31:31.481525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.092 [2024-11-15 10:31:31.481553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.092 [2024-11-15 10:31:31.481602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.092 [2024-11-15 10:31:31.481606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.350 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:43.350 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:12:43.350 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:43.350 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:43.350 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:43.350 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.350 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:43.350 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4915 00:12:43.607 [2024-11-15 10:31:31.873461] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:43.607 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:43.607 { 00:12:43.607 "nqn": "nqn.2016-06.io.spdk:cnode4915", 00:12:43.607 "tgt_name": "foobar", 00:12:43.607 "method": "nvmf_create_subsystem", 00:12:43.607 "req_id": 1 00:12:43.607 } 00:12:43.607 Got JSON-RPC error response 00:12:43.607 response: 00:12:43.607 { 00:12:43.607 "code": -32603, 00:12:43.607 "message": "Unable to find target foobar" 00:12:43.607 }' 00:12:43.607 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:43.607 { 00:12:43.607 "nqn": "nqn.2016-06.io.spdk:cnode4915", 00:12:43.607 "tgt_name": "foobar", 00:12:43.607 "method": "nvmf_create_subsystem", 00:12:43.607 "req_id": 1 00:12:43.607 } 00:12:43.607 Got JSON-RPC error response 00:12:43.607 
response: 00:12:43.607 { 00:12:43.607 "code": -32603, 00:12:43.607 "message": "Unable to find target foobar" 00:12:43.607 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:43.607 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:43.607 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31506 00:12:43.865 [2024-11-15 10:31:32.142407] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31506: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:43.865 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:43.865 { 00:12:43.865 "nqn": "nqn.2016-06.io.spdk:cnode31506", 00:12:43.865 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:43.865 "method": "nvmf_create_subsystem", 00:12:43.865 "req_id": 1 00:12:43.865 } 00:12:43.865 Got JSON-RPC error response 00:12:43.865 response: 00:12:43.865 { 00:12:43.865 "code": -32602, 00:12:43.865 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:43.865 }' 00:12:43.865 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:43.865 { 00:12:43.865 "nqn": "nqn.2016-06.io.spdk:cnode31506", 00:12:43.865 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:43.865 "method": "nvmf_create_subsystem", 00:12:43.865 "req_id": 1 00:12:43.865 } 00:12:43.865 Got JSON-RPC error response 00:12:43.865 response: 00:12:43.865 { 00:12:43.865 "code": -32602, 00:12:43.865 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:43.865 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:43.865 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:43.865 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18348 00:12:44.124 [2024-11-15 10:31:32.407227] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18348: invalid model number 'SPDK_Controller' 00:12:44.124 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:44.124 { 00:12:44.124 "nqn": "nqn.2016-06.io.spdk:cnode18348", 00:12:44.124 "model_number": "SPDK_Controller\u001f", 00:12:44.124 "method": "nvmf_create_subsystem", 00:12:44.124 "req_id": 1 00:12:44.124 } 00:12:44.124 Got JSON-RPC error response 00:12:44.124 response: 00:12:44.124 { 00:12:44.124 "code": -32602, 00:12:44.124 "message": "Invalid MN SPDK_Controller\u001f" 00:12:44.124 }' 00:12:44.124 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:44.124 { 00:12:44.124 "nqn": "nqn.2016-06.io.spdk:cnode18348", 00:12:44.124 "model_number": "SPDK_Controller\u001f", 00:12:44.124 "method": "nvmf_create_subsystem", 00:12:44.124 "req_id": 1 00:12:44.124 } 00:12:44.124 Got JSON-RPC error response 00:12:44.124 response: 00:12:44.124 { 00:12:44.124 "code": -32602, 00:12:44.124 "message": "Invalid MN SPDK_Controller\u001f" 00:12:44.124 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:44.124 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:44.124 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:44.124 10:31:32 
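Each negative test above follows the same pattern: issue nvmf_create_subsystem with one deliberately bad field, capture the JSON-RPC error text instead of letting the script die, and check the message for the expected complaint ("Unable to find target", "Invalid SN", "Invalid MN"). Boiled down to a sketch (the NQNs, flags, and the \037 control character match this run; the `expect` helper is only for the sketch, it is not a helper from invalid.sh):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

expect() {  # run an RPC that must fail, then check the error text
    local want=$1; shift
    local out
    out=$("$rpc" "$@" 2>&1) && { echo "unexpectedly succeeded"; return 1; }
    [[ $out == *"$want"* ]]
}

expect 'Unable to find target' nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4915
expect 'Invalid SN'            nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31506
expect 'Invalid MN'            nvmf_create_subsystem -d $'SPDK_Controller\037'      nqn.2016-06.io.spdk:cnode18348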
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:44.124 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:44.124 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:44.124 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:44.124 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.124 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:44.124 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:44.124 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:44.124 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:44.125 
10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 
00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'g^2Z%bxl15s'\''|PK@xLAbc' 00:12:44.125 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'g^2Z%bxl15s'\''|PK@xLAbc' nqn.2016-06.io.spdk:cnode5295 00:12:44.384 [2024-11-15 10:31:32.808598] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5295: invalid serial number 'g^2Z%bxl15s'|PK@xLAbc' 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:44.384 { 00:12:44.384 "nqn": "nqn.2016-06.io.spdk:cnode5295", 00:12:44.384 "serial_number": "g^2Z%bxl15s'\''|PK@xLAbc", 00:12:44.384 "method": "nvmf_create_subsystem", 00:12:44.384 "req_id": 1 00:12:44.384 } 00:12:44.384 Got JSON-RPC error response 00:12:44.384 response: 00:12:44.384 { 00:12:44.384 "code": -32602, 00:12:44.384 "message": "Invalid SN g^2Z%bxl15s'\''|PK@xLAbc" 00:12:44.384 }' 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:44.384 { 00:12:44.384 "nqn": "nqn.2016-06.io.spdk:cnode5295", 00:12:44.384 "serial_number": "g^2Z%bxl15s'|PK@xLAbc", 00:12:44.384 "method": "nvmf_create_subsystem", 00:12:44.384 "req_id": 1 00:12:44.384 } 00:12:44.384 Got JSON-RPC error response 00:12:44.384 response: 00:12:44.384 { 00:12:44.384 "code": -32602, 00:12:44.384 "message": "Invalid SN g^2Z%bxl15s'|PK@xLAbc" 00:12:44.384 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' 
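The long run of string+= lines is gen_random_s at work: with RANDOM seeded to 0 for reproducibility, it draws `length` characters from the printable range 32..127 (hence shell-hostile characters like ', |, and ^ ending up in the serial number), and the result is fed back into nvmf_create_subsystem to provoke another "Invalid SN" error; the 41-character draw that starts next does the same for a model number. The generator reduced to its core, as an independent rewrite rather than the exact helper body:

gen_random_s() {
    local length=$1 ll ch string=
    for (( ll = 0; ll < length; ll++ )); do
        # draw a printable code point in [32, 127] and append it as one character
        printf -v ch "\\$(printf '%03o' $(( RANDOM % 96 + 32 )))"
        string+=$ch
    done
    printf '%s\n' "$string"
}

RANDOM=0          # deterministic draws, as in the test (target/invalid.sh sets RANDOM=0)
gen_random_s 21   # 21 arbitrary printable characters for an intentionally bad serial number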
'73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 
00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.384 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 93 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.643 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x42' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 105 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
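The long xtrace run above (and continuing below) is target/invalid.sh assembling a random byte string one character at a time: each pass formats a code point with printf %x, expands it with echo -e '\xNN', and appends the result via string+=. A minimal bash sketch of that pattern follows; the gen_random_s name and the 0x20-0x7f code range are assumptions for illustration, not taken from the script.

#!/usr/bin/env bash
# Sketch of the per-character assembly seen in the xtrace: printf %x formats a
# code point, echo -e '\xNN' expands it, and string+= appends it.
# The function name and the 0x20-0x7f range are illustrative assumptions.
gen_random_s() {
    local length=$1 string='' ll code
    for ((ll = 0; ll < length; ll++)); do
        code=$((RANDOM % 96 + 32))                  # 0x20 .. 0x7f, includes DEL
        string+=$(echo -e "\x$(printf '%x' "$code")")
    done
    echo "$string"
}
gen_random_s 41

The [[ H == \- ]] test further down appears to guard against the assembled string starting with a dash before the result is echoed back to the caller.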
00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ H == \- ]] 00:12:44.644 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'H2'\''sVI*=q>%]JCJI/.W866^Bg[/CZfiLj`;.1%]JCJI/.W866^Bg[/CZfiLj`;.1%]JCJI/.W866^Bg[/CZfiLj`;.1%]JCJI/.\u007fW866^Bg[/CZfiLj`;.1%]JCJI/.\u007fW866^Bg[/CZfiLj`;.1%]JCJI/.\u007fW866^Bg[/CZfiLj`;.1%]JCJI/.\u007fW866^Bg[/CZfiLj`;.1 /dev/null' 00:12:47.491 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.395 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:49.395 00:12:49.395 real 0m9.127s 00:12:49.395 user 0m21.238s 00:12:49.395 sys 0m2.669s 00:12:49.395 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:49.395 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:49.395 ************************************ 00:12:49.395 END TEST nvmf_invalid 00:12:49.395 ************************************ 00:12:49.655 10:31:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:49.655 10:31:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:49.655 10:31:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:49.655 10:31:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:49.655 ************************************ 00:12:49.655 START TEST nvmf_connect_stress 00:12:49.655 
************************************ 00:12:49.655 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:49.655 * Looking for test storage... 00:12:49.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.655 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:49.655 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:12:49.655 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.655 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:49.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.656 --rc genhtml_branch_coverage=1 00:12:49.656 --rc genhtml_function_coverage=1 00:12:49.656 --rc genhtml_legend=1 00:12:49.656 --rc geninfo_all_blocks=1 00:12:49.656 --rc geninfo_unexecuted_blocks=1 00:12:49.656 00:12:49.656 ' 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:49.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.656 --rc genhtml_branch_coverage=1 00:12:49.656 --rc genhtml_function_coverage=1 00:12:49.656 --rc genhtml_legend=1 00:12:49.656 --rc geninfo_all_blocks=1 00:12:49.656 --rc geninfo_unexecuted_blocks=1 00:12:49.656 00:12:49.656 ' 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:49.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.656 --rc genhtml_branch_coverage=1 00:12:49.656 --rc genhtml_function_coverage=1 00:12:49.656 --rc genhtml_legend=1 00:12:49.656 --rc geninfo_all_blocks=1 00:12:49.656 --rc geninfo_unexecuted_blocks=1 00:12:49.656 00:12:49.656 ' 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:49.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.656 --rc genhtml_branch_coverage=1 00:12:49.656 --rc genhtml_function_coverage=1 00:12:49.656 --rc genhtml_legend=1 00:12:49.656 --rc geninfo_all_blocks=1 00:12:49.656 --rc geninfo_unexecuted_blocks=1 00:12:49.656 00:12:49.656 ' 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:49.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:49.656 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.217 10:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:12:52.217 Found 0000:82:00.0 (0x8086 - 0x159b) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:12:52.217 Found 0000:82:00.1 (0x8086 - 0x159b) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:12:52.217 Found net devices under 0000:82:00.0: cvl_0_0 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:12:52.217 Found net devices under 0000:82:00.1: cvl_0_1 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.217 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:12:52.218 00:12:52.218 --- 10.0.0.2 ping statistics --- 00:12:52.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.218 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:12:52.218 00:12:52.218 --- 10.0.0.1 ping statistics --- 00:12:52.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.218 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=338678 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 338678 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 338678 ']' 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
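Earlier in the trace (the "Found 0000:82:00.0 (0x8086 - 0x159b)" lines), common.sh located the supported NICs by PCI vendor/device ID and resolved each function to its kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A condensed sketch of that lookup; using lspci here is an assumption, since the script keeps its own pci_bus_cache, and the RDMA-only branches are skipped.

#!/usr/bin/env bash
# Condensed sketch of the NIC discovery above: collect Intel E810 functions
# (vendor 0x8086, device 0x159b or 0x1592) and map each PCI address to its
# net device via /sys/bus/pci/devices/<addr>/net/. lspci usage is assumed.
intel=0x8086
declare -a e810 net_devs
while read -r addr _class vendor device _; do
    [[ "0x$vendor" == "$intel" && $device =~ ^(159b|1592)$ ]] && e810+=("$addr")
done < <(lspci -Dnmm | tr -d '"')
for pci in "${e810[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue
    echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    net_devs+=("${pci_net_devs[@]##*/}")
done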
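With both ports identified, nvmf_tcp_init (traced just above) splits them into a target side and an initiator side: the first port moves into a private network namespace with address 10.0.0.2, the second stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and a ping in each direction verifies the path before nvmf_tgt is started inside the namespace. A condensed replay of that sequence, assuming the same interface names and root privileges:

#!/usr/bin/env bash
# Condensed replay of the nvmf_tcp_init steps traced above: one port becomes
# the target inside a namespace (10.0.0.2), the other stays in the root
# namespace as the initiator (10.0.0.1). Interface names taken from the log.
set -e
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

ip netns add "$ns"
ip link set "$target_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# Allow NVMe/TCP traffic (port 4420) arriving on the initiator-side interface
# (the traced command also tags the rule with -m comment, omitted here).
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions before launching nvmf_tgt in the namespace.
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1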
00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:52.218 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.218 [2024-11-15 10:31:40.471761] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:12:52.218 [2024-11-15 10:31:40.471864] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.218 [2024-11-15 10:31:40.564688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:52.218 [2024-11-15 10:31:40.637605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.218 [2024-11-15 10:31:40.637683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.218 [2024-11-15 10:31:40.637723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.218 [2024-11-15 10:31:40.637746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.218 [2024-11-15 10:31:40.637765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.218 [2024-11-15 10:31:40.639615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.218 [2024-11-15 10:31:40.639674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.218 [2024-11-15 10:31:40.639664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.476 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:52.476 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:12:52.476 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:52.476 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.477 [2024-11-15 10:31:40.852077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:52.477 10:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.477 [2024-11-15 10:31:40.869164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.477 NULL1 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=338818 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:52.477 10:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.477 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.043 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.043 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:53.043 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.043 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.043 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.300 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.300 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:53.300 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.300 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.300 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.557 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.557 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:53.557 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.557 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.557 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.815 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.815 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:53.815 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.815 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.815 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.072 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.072 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:54.072 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.072 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.072 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.637 10:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.637 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:54.637 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.637 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.637 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.895 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.895 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:54.895 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.895 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.895 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.152 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.152 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:55.152 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.152 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.152 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.410 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.410 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:55.410 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.410 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.410 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.976 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.976 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:55.976 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.976 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.976 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.233 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.233 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:56.233 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.233 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.233 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.491 10:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.491 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:56.491 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.491 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.491 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.749 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.749 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:56.749 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.749 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.749 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.007 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.007 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:57.007 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.007 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.007 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.572 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.572 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:57.572 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.572 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.572 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.829 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.829 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:57.829 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.829 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.829 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.088 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.088 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:58.088 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.088 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.088 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.346 10:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.346 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:58.346 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.346 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.346 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.603 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.604 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:58.604 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.604 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.604 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.169 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.169 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:59.169 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.169 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.169 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.427 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.427 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:59.427 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.427 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.427 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.685 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.685 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:59.685 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.685 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.685 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.942 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.942 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:12:59.942 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.942 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.942 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.200 10:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.200 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:13:00.200 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.200 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.200 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.772 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.772 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:13:00.772 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.772 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.772 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.030 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.030 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:13:01.030 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.030 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.030 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.288 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.288 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:13:01.288 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.288 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.288 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.545 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.545 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:13:01.545 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.545 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.545 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.803 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.803 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:13:01.803 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.803 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.803 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.368 10:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.368 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:13:02.368 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.368 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.368 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.626 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.626 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:13:02.626 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.626 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.626 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.626 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 338818 00:13:02.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (338818) - No such process 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 338818 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:02.884 rmmod nvme_tcp 00:13:02.884 rmmod nvme_fabrics 00:13:02.884 rmmod nvme_keyring 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 338678 ']' 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 338678 00:13:02.884 10:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 338678 ']' 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 338678 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 338678 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 338678' 00:13:02.884 killing process with pid 338678 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 338678 00:13:02.884 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 338678 00:13:03.144 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:03.144 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:03.144 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:03.144 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:03.144 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:03.144 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:03.144 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:03.144 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:03.144 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:03.144 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.144 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.144 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:05.680 00:13:05.680 real 0m15.678s 00:13:05.680 user 0m40.392s 00:13:05.680 sys 0m4.756s 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.680 ************************************ 00:13:05.680 END TEST nvmf_connect_stress 00:13:05.680 ************************************ 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 
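The trace records above are connect_stress.sh repeatedly probing the stress workload (PID 338818) with kill -0, interleaved with rpc_cmd calls to the target, until the process disappears; the trap is then cleared and nvmftestfini unloads nvme-tcp/nvme-fabrics and kills the nvmf target app (PID 338678). A minimal, generic sketch of that liveness-poll pattern follows; the names are illustrative and not taken from connect_stress.sh itself:

  # Wait for a background workload to finish. kill -0 delivers no signal; its
  # exit status only reports whether the PID still exists, which is the check
  # repeated throughout the trace above.
  wait_for_exit() {
      local pid="$1"
      while kill -0 "$pid" 2>/dev/null; do
          sleep 1                       # the real loop issues target RPCs between checks
      done
      wait "$pid" 2>/dev/null || true   # reap the exit status if it was our child
  }
  # usage: wait_for_exit "$stress_pid"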
00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:05.680 ************************************ 00:13:05.680 START TEST nvmf_fused_ordering 00:13:05.680 ************************************ 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:05.680 * Looking for test storage... 00:13:05.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:05.680 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:05.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.681 --rc genhtml_branch_coverage=1 00:13:05.681 --rc genhtml_function_coverage=1 00:13:05.681 --rc genhtml_legend=1 00:13:05.681 --rc geninfo_all_blocks=1 00:13:05.681 --rc geninfo_unexecuted_blocks=1 00:13:05.681 00:13:05.681 ' 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:05.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.681 --rc genhtml_branch_coverage=1 00:13:05.681 --rc genhtml_function_coverage=1 00:13:05.681 --rc genhtml_legend=1 00:13:05.681 --rc geninfo_all_blocks=1 00:13:05.681 --rc geninfo_unexecuted_blocks=1 00:13:05.681 00:13:05.681 ' 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:05.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.681 --rc genhtml_branch_coverage=1 00:13:05.681 --rc genhtml_function_coverage=1 00:13:05.681 --rc genhtml_legend=1 00:13:05.681 --rc geninfo_all_blocks=1 00:13:05.681 --rc geninfo_unexecuted_blocks=1 00:13:05.681 00:13:05.681 ' 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:05.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.681 --rc genhtml_branch_coverage=1 00:13:05.681 --rc genhtml_function_coverage=1 00:13:05.681 --rc genhtml_legend=1 00:13:05.681 --rc geninfo_all_blocks=1 00:13:05.681 --rc geninfo_unexecuted_blocks=1 00:13:05.681 00:13:05.681 ' 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:05.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:05.681 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:07.589 10:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:13:07.589 Found 0000:82:00.0 (0x8086 - 0x159b) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:13:07.589 Found 0000:82:00.1 (0x8086 - 0x159b) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:13:07.589 Found net devices under 0000:82:00.0: cvl_0_0 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:13:07.589 Found net devices under 0000:82:00.1: cvl_0_1 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.589 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:07.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:13:07.848 00:13:07.848 --- 10.0.0.2 ping statistics --- 00:13:07.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.848 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:13:07.848 00:13:07.848 --- 10.0.0.1 ping statistics --- 00:13:07.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.848 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=341977 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 341977 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 341977 ']' 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:07.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.848 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:07.849 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:07.849 [2024-11-15 10:31:56.227627] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:13:07.849 [2024-11-15 10:31:56.227723] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.849 [2024-11-15 10:31:56.300286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.108 [2024-11-15 10:31:56.354869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.108 [2024-11-15 10:31:56.354921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.108 [2024-11-15 10:31:56.354950] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.108 [2024-11-15 10:31:56.354962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.108 [2024-11-15 10:31:56.354971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.108 [2024-11-15 10:31:56.355538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.108 [2024-11-15 10:31:56.493698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.108 [2024-11-15 10:31:56.509894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.108 NULL1 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:08.108 [2024-11-15 10:31:56.554029] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
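The rpc_cmd calls traced at fused_ordering.sh lines 15-20 configure the target that the fused_ordering binary (started at line 22) connects to at 10.0.0.2:4420: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on port 4420, and a 1000 MB null bdev attached as namespace 1. In the autotest environment rpc_cmd forwards to scripts/rpc.py, so the same sequence can be sketched as plain rpc.py calls; the arguments below are copied from the trace, and only the default RPC socket (/var/tmp/spdk.sock) and an already-running nvmf_tgt app are assumed:

  # Sketch: replaying the traced target configuration by hand.
  RPC=scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                  # transport options exactly as traced
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                          # 1000 MB null bdev, 512-byte blocks
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # reported below as "Namespace ID: 1 size: 1GB"
  # Initiator side: run the fused-ordering test against that subsystem.
  test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'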
00:13:08.108 [2024-11-15 10:31:56.554063] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342005 ]
00:13:08.674 Attached to nqn.2016-06.io.spdk:cnode1 00:13:08.674 Namespace ID: 1 size: 1GB
00:13:08.674 fused_ordering(0) ... fused_ordering(1023) [1024 per-entry fused_ordering(N) progress lines, timestamps 00:13:08.674 through 00:13:10.327, condensed]
00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:10.327 rmmod nvme_tcp 00:13:10.327 rmmod nvme_fabrics 00:13:10.327 rmmod nvme_keyring 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:10.327 10:31:58
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 341977 ']' 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 341977 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 341977 ']' 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 341977 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 341977 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 341977' 00:13:10.327 killing process with pid 341977 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 341977 00:13:10.327 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 341977 00:13:10.586 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:10.586 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:10.586 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:10.586 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:10.586 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:10.586 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:10.586 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:10.586 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:10.586 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:10.586 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.586 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.586 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.124 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:13.124 00:13:13.124 real 0m7.376s 00:13:13.124 user 0m4.708s 00:13:13.124 sys 0m3.113s 00:13:13.124 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:13.124 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:13.124 ************************************ 00:13:13.124 END TEST nvmf_fused_ordering 00:13:13.124 
************************************ 00:13:13.124 10:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:13.124 10:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:13.124 10:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.125 ************************************ 00:13:13.125 START TEST nvmf_ns_masking 00:13:13.125 ************************************ 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:13.125 * Looking for test storage... 00:13:13.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:13.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.125 --rc genhtml_branch_coverage=1 00:13:13.125 --rc genhtml_function_coverage=1 00:13:13.125 --rc genhtml_legend=1 00:13:13.125 --rc geninfo_all_blocks=1 00:13:13.125 --rc geninfo_unexecuted_blocks=1 00:13:13.125 00:13:13.125 ' 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:13.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.125 --rc genhtml_branch_coverage=1 00:13:13.125 --rc genhtml_function_coverage=1 00:13:13.125 --rc genhtml_legend=1 00:13:13.125 --rc geninfo_all_blocks=1 00:13:13.125 --rc geninfo_unexecuted_blocks=1 00:13:13.125 00:13:13.125 ' 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:13.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.125 --rc genhtml_branch_coverage=1 00:13:13.125 --rc genhtml_function_coverage=1 00:13:13.125 --rc genhtml_legend=1 00:13:13.125 --rc geninfo_all_blocks=1 00:13:13.125 --rc geninfo_unexecuted_blocks=1 00:13:13.125 00:13:13.125 ' 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:13.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.125 --rc genhtml_branch_coverage=1 00:13:13.125 --rc genhtml_function_coverage=1 00:13:13.125 --rc genhtml_legend=1 00:13:13.125 --rc geninfo_all_blocks=1 00:13:13.125 --rc geninfo_unexecuted_blocks=1 00:13:13.125 00:13:13.125 ' 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.125 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:13.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2531eba7-fbdd-4494-b79a-255a206d03a0 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=aef944a6-1570-42b9-85c9-72f548dafe19 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b8611765-ae6a-4aa6-885d-6dfabc6c7fc6 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:13.126 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.033 10:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.033 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:13:15.034 Found 0000:82:00.0 (0x8086 - 0x159b) 00:13:15.034 10:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:13:15.034 Found 0000:82:00.1 (0x8086 - 0x159b) 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:13:15.034 Found net devices under 0000:82:00.0: cvl_0_0 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
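The discovery pass above boils down to checking each matched E810 PCI function for the net devices sysfs exposes beneath it; a rough standalone equivalent (PCI addresses taken from the log, interface names differ per host):

  # E810 functions (0x8086:0x159b) reported above
  for pci in 0000:82:00.0 0000:82:00.1; do
      # any netdev bound to the function shows up under its sysfs node
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -d "$netdir" ] && echo "Found net device under $pci: ${netdir##*/}"
      done
  done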
00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:13:15.034 Found net devices under 0000:82:00.1: cvl_0_1 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.034 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.292 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.292 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.292 10:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.292 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.292 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.292 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:13:15.292 00:13:15.292 --- 10.0.0.2 ping statistics --- 00:13:15.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.292 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:13:15.292 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:13:15.293 00:13:15.293 --- 10.0.0.1 ping statistics --- 00:13:15.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.293 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=344328 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 344328 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 344328 ']' 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:15.293 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:15.293 [2024-11-15 10:32:03.637971] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:13:15.293 [2024-11-15 10:32:03.638057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.293 [2024-11-15 10:32:03.707756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.551 [2024-11-15 10:32:03.761491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.551 [2024-11-15 10:32:03.761559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.551 [2024-11-15 10:32:03.761589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.551 [2024-11-15 10:32:03.761601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.551 [2024-11-15 10:32:03.761611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
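# --- Annotation (not captured output): the nvmf_tcp_init steps traced above,
# --- condensed. The target-side port cvl_0_0 (10.0.0.2) is moved into the
# --- cvl_0_0_ns_spdk network namespace where nvmf_tgt runs, while the
# --- initiator-side port cvl_0_1 (10.0.0.1) stays in the default namespace;
# --- the nvmf_tgt path is shortened relative to the spdk checkout.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF   # target runs inside the namespace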
00:13:15.551 [2024-11-15 10:32:03.762238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.551 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:15.551 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:13:15.551 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.551 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:15.551 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:15.551 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.551 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:15.809 [2024-11-15 10:32:04.148883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.809 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:15.809 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:15.809 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:16.068 Malloc1 00:13:16.068 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:16.326 Malloc2 00:13:16.584 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:16.841 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:17.099 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.357 [2024-11-15 10:32:05.696301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.357 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:17.357 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b8611765-ae6a-4aa6-885d-6dfabc6c7fc6 -a 10.0.0.2 -s 4420 -i 4 00:13:17.616 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.616 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:13:17.616 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.616 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:17.616 
10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.516 [ 0]:0x1 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:19.516 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.775 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c929acd45ef24f98bc26fbf9fdd6eaac 00:13:19.775 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c929acd45ef24f98bc26fbf9fdd6eaac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.775 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:20.034 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:20.034 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:20.034 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:20.034 [ 0]:0x1 00:13:20.034 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:20.034 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:20.034 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c929acd45ef24f98bc26fbf9fdd6eaac 00:13:20.034 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c929acd45ef24f98bc26fbf9fdd6eaac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.034 10:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:20.034 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:20.034 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:20.034 [ 1]:0x2 00:13:20.034 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:20.034 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:20.034 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee3f8972a0644d07bc67e6dcd8be19d6 00:13:20.035 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3f8972a0644d07bc67e6dcd8be19d6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.035 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:20.035 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.035 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.294 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:20.553 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:20.553 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b8611765-ae6a-4aa6-885d-6dfabc6c7fc6 -a 10.0.0.2 -s 4420 -i 4 00:13:20.811 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:20.811 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:13:20.811 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.811 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:13:20.811 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:13:20.811 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:23.339 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:23.339 [ 0]:0x2 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=ee3f8972a0644d07bc67e6dcd8be19d6 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3f8972a0644d07bc67e6dcd8be19d6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:23.340 [ 0]:0x1 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c929acd45ef24f98bc26fbf9fdd6eaac 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c929acd45ef24f98bc26fbf9fdd6eaac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:23.340 [ 1]:0x2 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:23.340 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:23.597 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee3f8972a0644d07bc67e6dcd8be19d6 00:13:23.597 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3f8972a0644d07bc67e6dcd8be19d6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.597 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.855 10:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:23.855 [ 0]:0x2 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee3f8972a0644d07bc67e6dcd8be19d6 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3f8972a0644d07bc67e6dcd8be19d6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:23.855 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.113 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:24.371 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:24.371 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b8611765-ae6a-4aa6-885d-6dfabc6c7fc6 -a 10.0.0.2 -s 4420 -i 4 00:13:24.371 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:24.371 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:13:24.371 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.371 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:13:24.371 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:13:24.371 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:26.901 [ 0]:0x1 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c929acd45ef24f98bc26fbf9fdd6eaac 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c929acd45ef24f98bc26fbf9fdd6eaac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:26.901 [ 1]:0x2 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee3f8972a0644d07bc67e6dcd8be19d6 00:13:26.901 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3f8972a0644d07bc67e6dcd8be19d6 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:26.902 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:26.902 [ 0]:0x2 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee3f8972a0644d07bc67e6dcd8be19d6 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3f8972a0644d07bc67e6dcd8be19d6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:26.902 10:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:26.902 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:27.160 [2024-11-15 10:32:15.609918] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:27.160 request: 00:13:27.160 { 00:13:27.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.160 "nsid": 2, 00:13:27.160 "host": "nqn.2016-06.io.spdk:host1", 00:13:27.160 "method": "nvmf_ns_remove_host", 00:13:27.160 "req_id": 1 00:13:27.160 } 00:13:27.160 Got JSON-RPC error response 00:13:27.160 response: 00:13:27.160 { 00:13:27.160 "code": -32602, 00:13:27.160 "message": "Invalid parameters" 00:13:27.160 } 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:27.418 10:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:27.418 [ 0]:0x2 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee3f8972a0644d07bc67e6dcd8be19d6 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3f8972a0644d07bc67e6dcd8be19d6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=345829 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 345829 /var/tmp/host.sock 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 345829 ']' 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:27.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:27.418 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:27.418 [2024-11-15 10:32:15.805303] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:13:27.418 [2024-11-15 10:32:15.805398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid345829 ] 00:13:27.418 [2024-11-15 10:32:15.870937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.677 [2024-11-15 10:32:15.929558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.935 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:27.935 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:13:27.935 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.193 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:28.451 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2531eba7-fbdd-4494-b79a-255a206d03a0 00:13:28.451 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:28.451 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2531EBA7FBDD4494B79A255A206D03A0 -i 00:13:28.708 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid aef944a6-1570-42b9-85c9-72f548dafe19 00:13:28.708 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:28.708 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g AEF944A6157042B985C972F548DAFE19 -i 00:13:28.966 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:29.223 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:29.480 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:29.481 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:30.045 nvme0n1 00:13:30.045 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:30.045 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:30.610 nvme1n2 00:13:30.610 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:30.610 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:30.610 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:30.610 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:30.610 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:30.867 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:30.867 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:30.867 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:30.868 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:31.125 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2531eba7-fbdd-4494-b79a-255a206d03a0 == \2\5\3\1\e\b\a\7\-\f\b\d\d\-\4\4\9\4\-\b\7\9\a\-\2\5\5\a\2\0\6\d\0\3\a\0 ]] 00:13:31.125 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:31.125 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:31.125 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:31.382 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
aef944a6-1570-42b9-85c9-72f548dafe19 == \a\e\f\9\4\4\a\6\-\1\5\7\0\-\4\2\b\9\-\8\5\c\9\-\7\2\f\5\4\8\d\a\f\e\1\9 ]] 00:13:31.382 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.639 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 2531eba7-fbdd-4494-b79a-255a206d03a0 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2531EBA7FBDD4494B79A255A206D03A0 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2531EBA7FBDD4494B79A255A206D03A0 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:31.897 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2531EBA7FBDD4494B79A255A206D03A0 00:13:32.154 [2024-11-15 10:32:20.480052] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:32.154 [2024-11-15 10:32:20.480098] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:32.154 [2024-11-15 10:32:20.480128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.154 request: 00:13:32.154 { 00:13:32.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.154 "namespace": { 00:13:32.154 "bdev_name": 
"invalid", 00:13:32.154 "nsid": 1, 00:13:32.154 "nguid": "2531EBA7FBDD4494B79A255A206D03A0", 00:13:32.154 "no_auto_visible": false 00:13:32.154 }, 00:13:32.154 "method": "nvmf_subsystem_add_ns", 00:13:32.154 "req_id": 1 00:13:32.154 } 00:13:32.154 Got JSON-RPC error response 00:13:32.154 response: 00:13:32.154 { 00:13:32.154 "code": -32602, 00:13:32.154 "message": "Invalid parameters" 00:13:32.154 } 00:13:32.154 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:32.154 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:32.154 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:32.154 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:32.154 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 2531eba7-fbdd-4494-b79a-255a206d03a0 00:13:32.154 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:32.154 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2531EBA7FBDD4494B79A255A206D03A0 -i 00:13:32.411 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:34.308 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:34.308 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:34.308 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:34.874 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:34.874 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 345829 00:13:34.874 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 345829 ']' 00:13:34.874 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 345829 00:13:34.874 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:13:34.874 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:34.874 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 345829 00:13:34.874 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:34.874 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:34.874 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 345829' 00:13:34.874 killing process with pid 345829 00:13:34.874 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 345829 00:13:34.874 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 345829 00:13:35.132 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.390 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:35.390 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:35.390 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:35.390 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:35.390 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:35.390 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:35.390 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:35.390 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:35.390 rmmod nvme_tcp 00:13:35.647 rmmod nvme_fabrics 00:13:35.647 rmmod nvme_keyring 00:13:35.647 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:35.647 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:35.647 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:35.647 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 344328 ']' 00:13:35.647 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 344328 00:13:35.647 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 344328 ']' 00:13:35.647 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 344328 00:13:35.647 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:13:35.647 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:35.648 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 344328 00:13:35.648 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:35.648 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:35.648 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 344328' 00:13:35.648 killing process with pid 344328 00:13:35.648 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 344328 00:13:35.648 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 344328 00:13:35.906 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:35.906 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:35.906 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:35.906 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:35.906 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:35.906 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:35.906 
10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:35.906 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:35.906 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:35.906 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.906 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.906 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.873 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:37.873 00:13:37.873 real 0m25.206s 00:13:37.873 user 0m36.452s 00:13:37.873 sys 0m4.793s 00:13:37.873 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:37.873 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:37.873 ************************************ 00:13:37.873 END TEST nvmf_ns_masking 00:13:37.873 ************************************ 00:13:37.873 10:32:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:37.873 10:32:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:37.873 10:32:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:37.873 10:32:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:37.873 10:32:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:38.209 ************************************ 00:13:38.209 START TEST nvmf_nvme_cli 00:13:38.209 ************************************ 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:38.209 * Looking for test storage... 
00:13:38.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:38.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.209 --rc genhtml_branch_coverage=1 00:13:38.209 --rc genhtml_function_coverage=1 00:13:38.209 --rc genhtml_legend=1 00:13:38.209 --rc geninfo_all_blocks=1 00:13:38.209 --rc geninfo_unexecuted_blocks=1 00:13:38.209 00:13:38.209 ' 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:38.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.209 --rc genhtml_branch_coverage=1 00:13:38.209 --rc genhtml_function_coverage=1 00:13:38.209 --rc genhtml_legend=1 00:13:38.209 --rc geninfo_all_blocks=1 00:13:38.209 --rc geninfo_unexecuted_blocks=1 00:13:38.209 00:13:38.209 ' 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:38.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.209 --rc genhtml_branch_coverage=1 00:13:38.209 --rc genhtml_function_coverage=1 00:13:38.209 --rc genhtml_legend=1 00:13:38.209 --rc geninfo_all_blocks=1 00:13:38.209 --rc geninfo_unexecuted_blocks=1 00:13:38.209 00:13:38.209 ' 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:38.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.209 --rc genhtml_branch_coverage=1 00:13:38.209 --rc genhtml_function_coverage=1 00:13:38.209 --rc genhtml_legend=1 00:13:38.209 --rc geninfo_all_blocks=1 00:13:38.209 --rc geninfo_unexecuted_blocks=1 00:13:38.209 00:13:38.209 ' 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:38.209 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:38.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:38.210 10:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:38.210 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.323 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:13:40.324 Found 0000:82:00.0 (0x8086 - 0x159b) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:13:40.324 Found 0000:82:00.1 (0x8086 - 0x159b) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.324 
10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:13:40.324 Found net devices under 0000:82:00.0: cvl_0_0 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:13:40.324 Found net devices under 0000:82:00.1: cvl_0_1 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:40.324 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:40.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:13:40.325 00:13:40.325 --- 10.0.0.2 ping statistics --- 00:13:40.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.325 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:40.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:13:40.325 00:13:40.325 --- 10.0.0.1 ping statistics --- 00:13:40.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.325 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=348867 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 348867 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 348867 ']' 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:40.325 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.325 [2024-11-15 10:32:28.711793] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
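The trace above is nvmf_tcp_init building the harness's loopback topology: the second e810 port (cvl_0_1) keeps the initiator address in the root namespace, the first port (cvl_0_0) is moved into a dedicated cvl_0_0_ns_spdk namespace for the target, and one ping in each direction confirms the link before the target comes up. Condensed into a standalone sketch, reusing the interface names, addresses and port seen in this run (root required; the script also flushes any pre-existing addresses and tags its iptables rule with an SPDK_NVMF comment, omitted here):

    ip netns add cvl_0_0_ns_spdk                                        # namespace that will host nvmf_tgt
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace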
00:13:40.325 [2024-11-15 10:32:28.711891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.325 [2024-11-15 10:32:28.786205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.584 [2024-11-15 10:32:28.848710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.584 [2024-11-15 10:32:28.848771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.584 [2024-11-15 10:32:28.848799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.584 [2024-11-15 10:32:28.848811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.584 [2024-11-15 10:32:28.848821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.584 [2024-11-15 10:32:28.850491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.584 [2024-11-15 10:32:28.850549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.584 [2024-11-15 10:32:28.850614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.584 [2024-11-15 10:32:28.850617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.584 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:40.584 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:13:40.584 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:40.584 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:40.584 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.584 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.584 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.584 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.584 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.584 [2024-11-15 10:32:29.003937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.584 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.584 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:40.584 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.584 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.844 Malloc0 00:13:40.844 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.844 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:40.844 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
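At this point the script has launched the target inside that namespace and waited for its RPC socket; the transport, malloc bdevs, subsystem, namespaces and listeners are all configured through rpc.py, as the lines below show. A rough condensation of that target-side sequence, with paths relative to an SPDK checkout and rpc.py talking to the default /var/tmp/spdk.sock:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 4-core mask, all tracepoint groups
    # once the RPC socket is up:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420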
00:13:40.844 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.844 Malloc1 00:13:40.844 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.845 [2024-11-15 10:32:29.106518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 4420 00:13:40.845 00:13:40.845 Discovery Log Number of Records 2, Generation counter 2 00:13:40.845 =====Discovery Log Entry 0====== 00:13:40.845 trtype: tcp 00:13:40.845 adrfam: ipv4 00:13:40.845 subtype: current discovery subsystem 00:13:40.845 treq: not required 00:13:40.845 portid: 0 00:13:40.845 trsvcid: 4420 00:13:40.845 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:40.845 traddr: 10.0.0.2 00:13:40.845 eflags: explicit discovery connections, duplicate discovery information 00:13:40.845 sectype: none 00:13:40.845 =====Discovery Log Entry 1====== 00:13:40.845 trtype: tcp 00:13:40.845 adrfam: ipv4 00:13:40.845 subtype: nvme subsystem 00:13:40.845 treq: not required 00:13:40.845 portid: 0 00:13:40.845 trsvcid: 4420 00:13:40.845 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:40.845 traddr: 10.0.0.2 00:13:40.845 eflags: none 00:13:40.845 sectype: none 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:40.845 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:41.103 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:41.103 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:41.103 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:41.103 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:41.103 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:41.103 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:41.103 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:41.103 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:41.103 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:41.669 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:41.669 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:13:41.669 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.669 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:13:41.669 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:13:41.669 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:44.195 10:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:44.195 /dev/nvme0n2 ]] 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.195 10:32:32 
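That completes the host side of the nvme_cli test: discover the subsystem, connect, confirm that two namespaces show up with the expected serial, then disconnect. Stripped of the shell tracing, the same flow looks roughly like this; the hostnqn in this run was generated on the fly and the hostid is just its UUID part, so both are regenerated here rather than hard-coded:

    HOSTNQN=$(nvme gen-hostnqn)       # this run produced nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
    HOSTID=${HOSTNQN##*uuid:}         # strip the prefix, keep the UUID
    nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect  --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # expect 2: one block device per Malloc namespace
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1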
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:44.195 rmmod nvme_tcp 00:13:44.195 rmmod nvme_fabrics 00:13:44.195 rmmod nvme_keyring 00:13:44.195 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 348867 ']' 00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 348867 00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 348867 ']' 00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 348867 00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 348867 
00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 348867' 00:13:44.454 killing process with pid 348867 00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 348867 00:13:44.454 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 348867 00:13:44.713 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:44.713 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:44.713 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:44.713 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:44.713 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:44.713 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:44.713 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:44.713 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:44.713 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:44.713 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.713 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.713 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.618 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:46.618 00:13:46.618 real 0m8.712s 00:13:46.618 user 0m16.985s 00:13:46.618 sys 0m2.317s 00:13:46.618 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:46.618 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:46.618 ************************************ 00:13:46.618 END TEST nvmf_nvme_cli 00:13:46.618 ************************************ 00:13:46.618 10:32:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:46.618 10:32:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:46.618 10:32:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:46.618 10:32:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:46.618 10:32:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:46.618 ************************************ 00:13:46.618 START TEST nvmf_vfio_user 00:13:46.618 ************************************ 00:13:46.618 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 
00:13:46.876 * Looking for test storage... 00:13:46.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.876 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:46.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.877 --rc genhtml_branch_coverage=1 00:13:46.877 --rc genhtml_function_coverage=1 00:13:46.877 --rc genhtml_legend=1 00:13:46.877 --rc geninfo_all_blocks=1 00:13:46.877 --rc geninfo_unexecuted_blocks=1 00:13:46.877 00:13:46.877 ' 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:46.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.877 --rc genhtml_branch_coverage=1 00:13:46.877 --rc genhtml_function_coverage=1 00:13:46.877 --rc genhtml_legend=1 00:13:46.877 --rc geninfo_all_blocks=1 00:13:46.877 --rc geninfo_unexecuted_blocks=1 00:13:46.877 00:13:46.877 ' 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:46.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.877 --rc genhtml_branch_coverage=1 00:13:46.877 --rc genhtml_function_coverage=1 00:13:46.877 --rc genhtml_legend=1 00:13:46.877 --rc geninfo_all_blocks=1 00:13:46.877 --rc geninfo_unexecuted_blocks=1 00:13:46.877 00:13:46.877 ' 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:46.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.877 --rc genhtml_branch_coverage=1 00:13:46.877 --rc genhtml_function_coverage=1 00:13:46.877 --rc genhtml_legend=1 00:13:46.877 --rc geninfo_all_blocks=1 00:13:46.877 --rc geninfo_unexecuted_blocks=1 00:13:46.877 00:13:46.877 ' 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:46.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:46.877 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=349697 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 349697' 00:13:46.878 Process pid: 349697 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 349697 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 349697 ']' 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:46.878 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:46.878 [2024-11-15 10:32:35.288597] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:13:46.878 [2024-11-15 10:32:35.288705] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.136 [2024-11-15 10:32:35.362742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.136 [2024-11-15 10:32:35.422560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.136 [2024-11-15 10:32:35.422619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
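Condensed from the trace above, the target bring-up is just: start nvmf_tgt on four cores, remember its pid, and block until its RPC socket answers. A minimal sketch, with the SPDK tree abbreviated to $SPDK and a plain polling loop standing in for the waitforlisten helper (spdk_get_version is simply a cheap RPC to probe with):

  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  # stand-in for waitforlisten: poll /var/tmp/spdk.sock until the app responds
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done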
00:13:47.136 [2024-11-15 10:32:35.422649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.136 [2024-11-15 10:32:35.422661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.136 [2024-11-15 10:32:35.422672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.136 [2024-11-15 10:32:35.424303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.136 [2024-11-15 10:32:35.424371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.136 [2024-11-15 10:32:35.424430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.136 [2024-11-15 10:32:35.424434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.136 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:47.136 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:13:47.136 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:48.508 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:48.508 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:48.508 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:48.508 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:48.508 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:48.508 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:48.766 Malloc1 00:13:48.766 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:49.024 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:49.283 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:49.540 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:49.540 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:49.541 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:49.798 Malloc2 00:13:49.798 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
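Stripped of the xtrace noise, the setup_nvmf_vfio_user sequence above is this RPC series, shown here for device 1 (device 2 repeats it with Malloc2, cnode2 and vfio-user2/2, as the trace continues below; rpc.py stands for the full scripts/rpc.py path used in the trace):

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0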
00:13:50.056 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:50.313 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:50.571 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:50.571 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:50.571 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:50.571 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:50.571 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:50.571 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:50.571 [2024-11-15 10:32:39.032038] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:13:50.571 [2024-11-15 10:32:39.032079] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350232 ] 00:13:50.832 [2024-11-15 10:32:39.081542] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:50.832 [2024-11-15 10:32:39.090814] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:50.832 [2024-11-15 10:32:39.090846] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9517514000 00:13:50.832 [2024-11-15 10:32:39.091806] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.832 [2024-11-15 10:32:39.092802] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.832 [2024-11-15 10:32:39.093810] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.832 [2024-11-15 10:32:39.094815] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:50.832 [2024-11-15 10:32:39.095815] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:50.832 [2024-11-15 10:32:39.096820] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.832 [2024-11-15 10:32:39.097826] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:13:50.832 [2024-11-15 10:32:39.098828] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.832 [2024-11-15 10:32:39.099835] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:50.832 [2024-11-15 10:32:39.099857] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9517509000 00:13:50.832 [2024-11-15 10:32:39.101000] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:50.832 [2024-11-15 10:32:39.116675] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:50.832 [2024-11-15 10:32:39.116738] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:50.832 [2024-11-15 10:32:39.118953] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:50.832 [2024-11-15 10:32:39.119014] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:50.832 [2024-11-15 10:32:39.119117] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:50.832 [2024-11-15 10:32:39.119151] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:50.832 [2024-11-15 10:32:39.119163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:50.832 [2024-11-15 10:32:39.119952] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:50.832 [2024-11-15 10:32:39.119974] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:50.832 [2024-11-15 10:32:39.119986] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:50.832 [2024-11-15 10:32:39.120958] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:50.832 [2024-11-15 10:32:39.120979] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:50.832 [2024-11-15 10:32:39.120992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:50.832 [2024-11-15 10:32:39.121969] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:50.832 [2024-11-15 10:32:39.121988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:50.832 [2024-11-15 10:32:39.122967] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:13:50.832 [2024-11-15 10:32:39.122986] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:50.832 [2024-11-15 10:32:39.122994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:50.832 [2024-11-15 10:32:39.123006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:50.832 [2024-11-15 10:32:39.123116] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:50.832 [2024-11-15 10:32:39.123124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:50.832 [2024-11-15 10:32:39.123132] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:50.832 [2024-11-15 10:32:39.123978] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:50.832 [2024-11-15 10:32:39.124976] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:50.832 [2024-11-15 10:32:39.127377] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:50.833 [2024-11-15 10:32:39.127977] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:50.833 [2024-11-15 10:32:39.128115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:50.833 [2024-11-15 10:32:39.128991] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:50.833 [2024-11-15 10:32:39.129009] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:50.833 [2024-11-15 10:32:39.129023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129048] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:50.833 [2024-11-15 10:32:39.129061] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129093] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:50.833 [2024-11-15 10:32:39.129104] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:50.833 [2024-11-15 10:32:39.129111] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.833 [2024-11-15 10:32:39.129133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:13:50.833 [2024-11-15 10:32:39.129207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:50.833 [2024-11-15 10:32:39.129227] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:50.833 [2024-11-15 10:32:39.129235] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:50.833 [2024-11-15 10:32:39.129242] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:50.833 [2024-11-15 10:32:39.129251] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:50.833 [2024-11-15 10:32:39.129262] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:50.833 [2024-11-15 10:32:39.129271] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:50.833 [2024-11-15 10:32:39.129278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:50.833 [2024-11-15 10:32:39.129325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:50.833 [2024-11-15 10:32:39.129359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.833 [2024-11-15 10:32:39.129381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.833 [2024-11-15 10:32:39.129394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.833 [2024-11-15 10:32:39.129407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.833 [2024-11-15 10:32:39.129416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:50.833 [2024-11-15 10:32:39.129459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:50.833 [2024-11-15 10:32:39.129476] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:50.833 
[2024-11-15 10:32:39.129487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129523] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:50.833 [2024-11-15 10:32:39.129538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:50.833 [2024-11-15 10:32:39.129608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129640] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:50.833 [2024-11-15 10:32:39.129649] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:50.833 [2024-11-15 10:32:39.129669] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.833 [2024-11-15 10:32:39.129679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:50.833 [2024-11-15 10:32:39.129694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:50.833 [2024-11-15 10:32:39.129714] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:50.833 [2024-11-15 10:32:39.129752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129780] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:50.833 [2024-11-15 10:32:39.129788] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:50.833 [2024-11-15 10:32:39.129794] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.833 [2024-11-15 10:32:39.129803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:50.833 [2024-11-15 10:32:39.129836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:50.833 [2024-11-15 10:32:39.129861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129887] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:50.833 [2024-11-15 10:32:39.129899] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:50.833 [2024-11-15 10:32:39.129905] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.833 [2024-11-15 10:32:39.129914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:50.833 [2024-11-15 10:32:39.129930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:50.833 [2024-11-15 10:32:39.129945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.129997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.130005] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:50.833 [2024-11-15 10:32:39.130012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:50.833 [2024-11-15 10:32:39.130021] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:50.833 [2024-11-15 10:32:39.130050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:50.833 [2024-11-15 10:32:39.130069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:50.833 [2024-11-15 10:32:39.130088] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:50.833 [2024-11-15 10:32:39.130100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:50.833 [2024-11-15 10:32:39.130116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:50.833 [2024-11-15 10:32:39.130127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:50.834 [2024-11-15 10:32:39.130143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:50.834 [2024-11-15 10:32:39.130154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:50.834 [2024-11-15 10:32:39.130177] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:50.834 [2024-11-15 10:32:39.130187] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:50.834 [2024-11-15 10:32:39.130193] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:50.834 [2024-11-15 10:32:39.130199] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:50.834 [2024-11-15 10:32:39.130205] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:50.834 [2024-11-15 10:32:39.130214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:50.834 [2024-11-15 10:32:39.130229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:50.834 [2024-11-15 10:32:39.130237] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:50.834 [2024-11-15 10:32:39.130243] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.834 [2024-11-15 10:32:39.130251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:50.834 [2024-11-15 10:32:39.130262] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:50.834 [2024-11-15 10:32:39.130270] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:50.834 [2024-11-15 10:32:39.130275] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.834 [2024-11-15 10:32:39.130284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:50.834 [2024-11-15 10:32:39.130296] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:50.834 [2024-11-15 10:32:39.130304] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:50.834 [2024-11-15 10:32:39.130310] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.834 [2024-11-15 10:32:39.130318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:50.834 [2024-11-15 10:32:39.130329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:50.834 [2024-11-15 10:32:39.130375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:13:50.834 [2024-11-15 10:32:39.130401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:50.834 [2024-11-15 10:32:39.130414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:50.834 ===================================================== 00:13:50.834 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:50.834 ===================================================== 00:13:50.834 Controller Capabilities/Features 00:13:50.834 ================================ 00:13:50.834 Vendor ID: 4e58 00:13:50.834 Subsystem Vendor ID: 4e58 00:13:50.834 Serial Number: SPDK1 00:13:50.834 Model Number: SPDK bdev Controller 00:13:50.834 Firmware Version: 25.01 00:13:50.834 Recommended Arb Burst: 6 00:13:50.834 IEEE OUI Identifier: 8d 6b 50 00:13:50.834 Multi-path I/O 00:13:50.834 May have multiple subsystem ports: Yes 00:13:50.834 May have multiple controllers: Yes 00:13:50.834 Associated with SR-IOV VF: No 00:13:50.834 Max Data Transfer Size: 131072 00:13:50.834 Max Number of Namespaces: 32 00:13:50.834 Max Number of I/O Queues: 127 00:13:50.834 NVMe Specification Version (VS): 1.3 00:13:50.834 NVMe Specification Version (Identify): 1.3 00:13:50.834 Maximum Queue Entries: 256 00:13:50.834 Contiguous Queues Required: Yes 00:13:50.834 Arbitration Mechanisms Supported 00:13:50.834 Weighted Round Robin: Not Supported 00:13:50.834 Vendor Specific: Not Supported 00:13:50.834 Reset Timeout: 15000 ms 00:13:50.834 Doorbell Stride: 4 bytes 00:13:50.834 NVM Subsystem Reset: Not Supported 00:13:50.834 Command Sets Supported 00:13:50.834 NVM Command Set: Supported 00:13:50.834 Boot Partition: Not Supported 00:13:50.834 Memory Page Size Minimum: 4096 bytes 00:13:50.834 Memory Page Size Maximum: 4096 bytes 00:13:50.834 Persistent Memory Region: Not Supported 00:13:50.834 Optional Asynchronous Events Supported 00:13:50.834 Namespace Attribute Notices: Supported 00:13:50.834 Firmware Activation Notices: Not Supported 00:13:50.834 ANA Change Notices: Not Supported 00:13:50.834 PLE Aggregate Log Change Notices: Not Supported 00:13:50.834 LBA Status Info Alert Notices: Not Supported 00:13:50.834 EGE Aggregate Log Change Notices: Not Supported 00:13:50.834 Normal NVM Subsystem Shutdown event: Not Supported 00:13:50.834 Zone Descriptor Change Notices: Not Supported 00:13:50.834 Discovery Log Change Notices: Not Supported 00:13:50.834 Controller Attributes 00:13:50.834 128-bit Host Identifier: Supported 00:13:50.834 Non-Operational Permissive Mode: Not Supported 00:13:50.834 NVM Sets: Not Supported 00:13:50.834 Read Recovery Levels: Not Supported 00:13:50.834 Endurance Groups: Not Supported 00:13:50.834 Predictable Latency Mode: Not Supported 00:13:50.834 Traffic Based Keep ALive: Not Supported 00:13:50.834 Namespace Granularity: Not Supported 00:13:50.834 SQ Associations: Not Supported 00:13:50.834 UUID List: Not Supported 00:13:50.834 Multi-Domain Subsystem: Not Supported 00:13:50.834 Fixed Capacity Management: Not Supported 00:13:50.834 Variable Capacity Management: Not Supported 00:13:50.834 Delete Endurance Group: Not Supported 00:13:50.834 Delete NVM Set: Not Supported 00:13:50.834 Extended LBA Formats Supported: Not Supported 00:13:50.834 Flexible Data Placement Supported: Not Supported 00:13:50.834 00:13:50.834 Controller Memory Buffer Support 00:13:50.834 ================================ 00:13:50.834 
Supported: No 00:13:50.834 00:13:50.834 Persistent Memory Region Support 00:13:50.834 ================================ 00:13:50.834 Supported: No 00:13:50.834 00:13:50.834 Admin Command Set Attributes 00:13:50.834 ============================ 00:13:50.834 Security Send/Receive: Not Supported 00:13:50.834 Format NVM: Not Supported 00:13:50.834 Firmware Activate/Download: Not Supported 00:13:50.834 Namespace Management: Not Supported 00:13:50.834 Device Self-Test: Not Supported 00:13:50.834 Directives: Not Supported 00:13:50.834 NVMe-MI: Not Supported 00:13:50.834 Virtualization Management: Not Supported 00:13:50.834 Doorbell Buffer Config: Not Supported 00:13:50.834 Get LBA Status Capability: Not Supported 00:13:50.834 Command & Feature Lockdown Capability: Not Supported 00:13:50.834 Abort Command Limit: 4 00:13:50.834 Async Event Request Limit: 4 00:13:50.834 Number of Firmware Slots: N/A 00:13:50.834 Firmware Slot 1 Read-Only: N/A 00:13:50.834 Firmware Activation Without Reset: N/A 00:13:50.834 Multiple Update Detection Support: N/A 00:13:50.834 Firmware Update Granularity: No Information Provided 00:13:50.834 Per-Namespace SMART Log: No 00:13:50.834 Asymmetric Namespace Access Log Page: Not Supported 00:13:50.834 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:50.834 Command Effects Log Page: Supported 00:13:50.834 Get Log Page Extended Data: Supported 00:13:50.834 Telemetry Log Pages: Not Supported 00:13:50.834 Persistent Event Log Pages: Not Supported 00:13:50.834 Supported Log Pages Log Page: May Support 00:13:50.834 Commands Supported & Effects Log Page: Not Supported 00:13:50.834 Feature Identifiers & Effects Log Page:May Support 00:13:50.834 NVMe-MI Commands & Effects Log Page: May Support 00:13:50.834 Data Area 4 for Telemetry Log: Not Supported 00:13:50.834 Error Log Page Entries Supported: 128 00:13:50.834 Keep Alive: Supported 00:13:50.834 Keep Alive Granularity: 10000 ms 00:13:50.834 00:13:50.834 NVM Command Set Attributes 00:13:50.834 ========================== 00:13:50.834 Submission Queue Entry Size 00:13:50.834 Max: 64 00:13:50.834 Min: 64 00:13:50.834 Completion Queue Entry Size 00:13:50.835 Max: 16 00:13:50.835 Min: 16 00:13:50.835 Number of Namespaces: 32 00:13:50.835 Compare Command: Supported 00:13:50.835 Write Uncorrectable Command: Not Supported 00:13:50.835 Dataset Management Command: Supported 00:13:50.835 Write Zeroes Command: Supported 00:13:50.835 Set Features Save Field: Not Supported 00:13:50.835 Reservations: Not Supported 00:13:50.835 Timestamp: Not Supported 00:13:50.835 Copy: Supported 00:13:50.835 Volatile Write Cache: Present 00:13:50.835 Atomic Write Unit (Normal): 1 00:13:50.835 Atomic Write Unit (PFail): 1 00:13:50.835 Atomic Compare & Write Unit: 1 00:13:50.835 Fused Compare & Write: Supported 00:13:50.835 Scatter-Gather List 00:13:50.835 SGL Command Set: Supported (Dword aligned) 00:13:50.835 SGL Keyed: Not Supported 00:13:50.835 SGL Bit Bucket Descriptor: Not Supported 00:13:50.835 SGL Metadata Pointer: Not Supported 00:13:50.835 Oversized SGL: Not Supported 00:13:50.835 SGL Metadata Address: Not Supported 00:13:50.835 SGL Offset: Not Supported 00:13:50.835 Transport SGL Data Block: Not Supported 00:13:50.835 Replay Protected Memory Block: Not Supported 00:13:50.835 00:13:50.835 Firmware Slot Information 00:13:50.835 ========================= 00:13:50.835 Active slot: 1 00:13:50.835 Slot 1 Firmware Revision: 25.01 00:13:50.835 00:13:50.835 00:13:50.835 Commands Supported and Effects 00:13:50.835 ============================== 00:13:50.835 Admin 
Commands 00:13:50.835 -------------- 00:13:50.835 Get Log Page (02h): Supported 00:13:50.835 Identify (06h): Supported 00:13:50.835 Abort (08h): Supported 00:13:50.835 Set Features (09h): Supported 00:13:50.835 Get Features (0Ah): Supported 00:13:50.835 Asynchronous Event Request (0Ch): Supported 00:13:50.835 Keep Alive (18h): Supported 00:13:50.835 I/O Commands 00:13:50.835 ------------ 00:13:50.835 Flush (00h): Supported LBA-Change 00:13:50.835 Write (01h): Supported LBA-Change 00:13:50.835 Read (02h): Supported 00:13:50.835 Compare (05h): Supported 00:13:50.835 Write Zeroes (08h): Supported LBA-Change 00:13:50.835 Dataset Management (09h): Supported LBA-Change 00:13:50.835 Copy (19h): Supported LBA-Change 00:13:50.835 00:13:50.835 Error Log 00:13:50.835 ========= 00:13:50.835 00:13:50.835 Arbitration 00:13:50.835 =========== 00:13:50.835 Arbitration Burst: 1 00:13:50.835 00:13:50.835 Power Management 00:13:50.835 ================ 00:13:50.835 Number of Power States: 1 00:13:50.835 Current Power State: Power State #0 00:13:50.835 Power State #0: 00:13:50.835 Max Power: 0.00 W 00:13:50.835 Non-Operational State: Operational 00:13:50.835 Entry Latency: Not Reported 00:13:50.835 Exit Latency: Not Reported 00:13:50.835 Relative Read Throughput: 0 00:13:50.835 Relative Read Latency: 0 00:13:50.835 Relative Write Throughput: 0 00:13:50.835 Relative Write Latency: 0 00:13:50.835 Idle Power: Not Reported 00:13:50.835 Active Power: Not Reported 00:13:50.835 Non-Operational Permissive Mode: Not Supported 00:13:50.835 00:13:50.835 Health Information 00:13:50.835 ================== 00:13:50.835 Critical Warnings: 00:13:50.835 Available Spare Space: OK 00:13:50.835 Temperature: OK 00:13:50.835 Device Reliability: OK 00:13:50.835 Read Only: No 00:13:50.835 Volatile Memory Backup: OK 00:13:50.835 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:50.835 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:50.835 Available Spare: 0% 00:13:50.835 Available Sp[2024-11-15 10:32:39.130550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:50.835 [2024-11-15 10:32:39.130567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:50.835 [2024-11-15 10:32:39.130615] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:50.835 [2024-11-15 10:32:39.130633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.835 [2024-11-15 10:32:39.130661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.835 [2024-11-15 10:32:39.130671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.835 [2024-11-15 10:32:39.130681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.835 [2024-11-15 10:32:39.132373] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:50.835 [2024-11-15 10:32:39.132408] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:50.835 [2024-11-15 10:32:39.133008] 
vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:50.835 [2024-11-15 10:32:39.133102] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:50.835 [2024-11-15 10:32:39.133120] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:50.835 [2024-11-15 10:32:39.134021] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:50.835 [2024-11-15 10:32:39.134044] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:50.835 [2024-11-15 10:32:39.134102] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:50.835 [2024-11-15 10:32:39.137374] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:50.835 are Threshold: 0% 00:13:50.835 Life Percentage Used: 0% 00:13:50.835 Data Units Read: 0 00:13:50.835 Data Units Written: 0 00:13:50.835 Host Read Commands: 0 00:13:50.835 Host Write Commands: 0 00:13:50.835 Controller Busy Time: 0 minutes 00:13:50.835 Power Cycles: 0 00:13:50.835 Power On Hours: 0 hours 00:13:50.835 Unsafe Shutdowns: 0 00:13:50.835 Unrecoverable Media Errors: 0 00:13:50.835 Lifetime Error Log Entries: 0 00:13:50.835 Warning Temperature Time: 0 minutes 00:13:50.835 Critical Temperature Time: 0 minutes 00:13:50.835 00:13:50.835 Number of Queues 00:13:50.835 ================ 00:13:50.835 Number of I/O Submission Queues: 127 00:13:50.835 Number of I/O Completion Queues: 127 00:13:50.835 00:13:50.835 Active Namespaces 00:13:50.835 ================= 00:13:50.835 Namespace ID:1 00:13:50.835 Error Recovery Timeout: Unlimited 00:13:50.835 Command Set Identifier: NVM (00h) 00:13:50.835 Deallocate: Supported 00:13:50.835 Deallocated/Unwritten Error: Not Supported 00:13:50.835 Deallocated Read Value: Unknown 00:13:50.835 Deallocate in Write Zeroes: Not Supported 00:13:50.835 Deallocated Guard Field: 0xFFFF 00:13:50.835 Flush: Supported 00:13:50.835 Reservation: Supported 00:13:50.835 Namespace Sharing Capabilities: Multiple Controllers 00:13:50.835 Size (in LBAs): 131072 (0GiB) 00:13:50.835 Capacity (in LBAs): 131072 (0GiB) 00:13:50.835 Utilization (in LBAs): 131072 (0GiB) 00:13:50.835 NGUID: 368A6F54642E4676AD8A717B60DFF501 00:13:50.835 UUID: 368a6f54-642e-4676-ad8a-717b60dff501 00:13:50.835 Thin Provisioning: Not Supported 00:13:50.835 Per-NS Atomic Units: Yes 00:13:50.835 Atomic Boundary Size (Normal): 0 00:13:50.835 Atomic Boundary Size (PFail): 0 00:13:50.835 Atomic Boundary Offset: 0 00:13:50.835 Maximum Single Source Range Length: 65535 00:13:50.835 Maximum Copy Length: 65535 00:13:50.835 Maximum Source Range Count: 1 00:13:50.835 NGUID/EUI64 Never Reused: No 00:13:50.835 Namespace Write Protected: No 00:13:50.835 Number of LBA Formats: 1 00:13:50.835 Current LBA Format: LBA Format #00 00:13:50.836 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:50.836 00:13:50.836 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
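Every initiator-side tool in this test reaches the controller through the same transport ID string: trtype VFIOUSER, traddr pointing at the listener directory created earlier, and subnqn naming the subsystem. Condensed from the identify and perf invocations traced above (the extra -L debug log flags are omitted here):

  trid='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  spdk_nvme_identify -r "$trid" -g
  spdk_nvme_perf -r "$trid" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2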
00:13:51.094 [2024-11-15 10:32:39.386297] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:56.358 Initializing NVMe Controllers 00:13:56.358 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:56.358 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:56.358 Initialization complete. Launching workers. 00:13:56.358 ======================================================== 00:13:56.358 Latency(us) 00:13:56.358 Device Information : IOPS MiB/s Average min max 00:13:56.358 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32285.40 126.11 3964.75 1196.70 8289.91 00:13:56.358 ======================================================== 00:13:56.358 Total : 32285.40 126.11 3964.75 1196.70 8289.91 00:13:56.358 00:13:56.358 [2024-11-15 10:32:44.408786] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:56.358 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:56.358 [2024-11-15 10:32:44.661990] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:01.614 Initializing NVMe Controllers 00:14:01.614 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:01.615 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:01.615 Initialization complete. Launching workers. 
00:14:01.615 ======================================================== 00:14:01.615 Latency(us) 00:14:01.615 Device Information : IOPS MiB/s Average min max 00:14:01.615 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.27 62.70 7985.77 6976.57 11974.76 00:14:01.615 ======================================================== 00:14:01.615 Total : 16050.27 62.70 7985.77 6976.57 11974.76 00:14:01.615 00:14:01.615 [2024-11-15 10:32:49.703409] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:01.615 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:01.615 [2024-11-15 10:32:49.930483] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:06.909 [2024-11-15 10:32:54.995064] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:06.909 Initializing NVMe Controllers 00:14:06.909 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:06.909 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:06.909 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:06.909 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:06.909 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:06.909 Initialization complete. Launching workers. 00:14:06.909 Starting thread on core 2 00:14:06.909 Starting thread on core 3 00:14:06.909 Starting thread on core 1 00:14:06.909 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:06.909 [2024-11-15 10:32:55.314855] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:10.191 [2024-11-15 10:32:58.388165] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:10.191 Initializing NVMe Controllers 00:14:10.191 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:10.191 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:10.191 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:10.191 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:10.191 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:10.191 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:10.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:10.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:10.191 Initialization complete. Launching workers. 
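The thread placement printed by the reconnect example above follows from its core mask: -c 0xE selects cores 1, 2 and 3 (the set bits of 0xE), which is exactly where the three workers start; the arbitration run's printed configuration shows -c 0xf, and it accordingly associates lcores 0 through 3. A small illustrative decode of the mask (not part of the test):
# List the CPU cores selected by the SPDK core mask 0xE used by the reconnect example above.
mask=$((0xE))
for core in $(seq 0 31); do
    [ $(( (mask >> core) & 1 )) -eq 1 ] && echo "core $core"
done
# -> core 1, core 2, core 3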
00:14:10.191 Starting thread on core 1 with urgent priority queue 00:14:10.191 Starting thread on core 2 with urgent priority queue 00:14:10.191 Starting thread on core 3 with urgent priority queue 00:14:10.191 Starting thread on core 0 with urgent priority queue 00:14:10.191 SPDK bdev Controller (SPDK1 ) core 0: 4697.67 IO/s 21.29 secs/100000 ios 00:14:10.191 SPDK bdev Controller (SPDK1 ) core 1: 5083.00 IO/s 19.67 secs/100000 ios 00:14:10.191 SPDK bdev Controller (SPDK1 ) core 2: 5453.33 IO/s 18.34 secs/100000 ios 00:14:10.191 SPDK bdev Controller (SPDK1 ) core 3: 5551.00 IO/s 18.01 secs/100000 ios 00:14:10.191 ======================================================== 00:14:10.191 00:14:10.191 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:10.448 [2024-11-15 10:32:58.703864] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:10.448 Initializing NVMe Controllers 00:14:10.448 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:10.448 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:10.448 Namespace ID: 1 size: 0GB 00:14:10.448 Initialization complete. 00:14:10.448 INFO: using host memory buffer for IO 00:14:10.448 Hello world! 00:14:10.448 [2024-11-15 10:32:58.741561] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:10.448 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:10.706 [2024-11-15 10:32:59.053284] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:11.637 Initializing NVMe Controllers 00:14:11.637 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:11.637 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:11.637 Initialization complete. Launching workers. 
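In the arbitration summary above, the secs/100000 ios column matches the reciprocal of the IO/s column scaled by the 100000 I/Os per worker visible in the printed run configuration (-n 100000). For core 0, for example (illustrative check only):
# Time to complete 100000 I/Os at the rate reported for core 0 of the arbitration run above.
awk 'BEGIN { printf "%.2f secs/100000 ios\n", 100000 / 4697.67 }'
# -> 21.29 secs/100000 ios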
00:14:11.637 submit (in ns) avg, min, max = 7795.7, 3502.2, 4018028.9 00:14:11.637 complete (in ns) avg, min, max = 26264.2, 2072.2, 4048108.9 00:14:11.637 00:14:11.637 Submit histogram 00:14:11.637 ================ 00:14:11.637 Range in us Cumulative Count 00:14:11.637 3.484 - 3.508: 0.0158% ( 2) 00:14:11.637 3.508 - 3.532: 0.6863% ( 85) 00:14:11.637 3.532 - 3.556: 1.7433% ( 134) 00:14:11.637 3.556 - 3.579: 5.7190% ( 504) 00:14:11.637 3.579 - 3.603: 10.9569% ( 664) 00:14:11.637 3.603 - 3.627: 19.1843% ( 1043) 00:14:11.637 3.627 - 3.650: 28.1297% ( 1134) 00:14:11.637 3.650 - 3.674: 37.1066% ( 1138) 00:14:11.637 3.674 - 3.698: 43.4961% ( 810) 00:14:11.637 3.698 - 3.721: 48.9469% ( 691) 00:14:11.637 3.721 - 3.745: 52.5440% ( 456) 00:14:11.637 3.745 - 3.769: 56.4408% ( 494) 00:14:11.637 3.769 - 3.793: 59.9038% ( 439) 00:14:11.637 3.793 - 3.816: 63.2642% ( 426) 00:14:11.637 3.816 - 3.840: 66.8218% ( 451) 00:14:11.637 3.840 - 3.864: 71.1840% ( 553) 00:14:11.637 3.864 - 3.887: 75.4279% ( 538) 00:14:11.637 3.887 - 3.911: 79.0644% ( 461) 00:14:11.638 3.911 - 3.935: 81.8411% ( 352) 00:14:11.638 3.935 - 3.959: 83.6239% ( 226) 00:14:11.638 3.959 - 3.982: 84.8229% ( 152) 00:14:11.638 3.982 - 4.006: 86.1245% ( 165) 00:14:11.638 4.006 - 4.030: 87.1973% ( 136) 00:14:11.638 4.030 - 4.053: 88.0413% ( 107) 00:14:11.638 4.053 - 4.077: 88.7592% ( 91) 00:14:11.638 4.077 - 4.101: 89.4928% ( 93) 00:14:11.638 4.101 - 4.124: 90.1002% ( 77) 00:14:11.638 4.124 - 4.148: 90.6681% ( 72) 00:14:11.638 4.148 - 4.172: 91.0310% ( 46) 00:14:11.638 4.172 - 4.196: 91.3939% ( 46) 00:14:11.638 4.196 - 4.219: 91.6700% ( 35) 00:14:11.638 4.219 - 4.243: 91.8593% ( 24) 00:14:11.638 4.243 - 4.267: 92.1354% ( 35) 00:14:11.638 4.267 - 4.290: 92.2931% ( 20) 00:14:11.638 4.290 - 4.314: 92.4667% ( 22) 00:14:11.638 4.314 - 4.338: 92.5692% ( 13) 00:14:11.638 4.338 - 4.361: 92.7033% ( 17) 00:14:11.638 4.361 - 4.385: 92.8059% ( 13) 00:14:11.638 4.385 - 4.409: 92.9163% ( 14) 00:14:11.638 4.409 - 4.433: 93.0820% ( 21) 00:14:11.638 4.433 - 4.456: 93.1530% ( 9) 00:14:11.638 4.456 - 4.480: 93.2082% ( 7) 00:14:11.638 4.480 - 4.504: 93.3896% ( 23) 00:14:11.638 4.504 - 4.527: 93.4922% ( 13) 00:14:11.638 4.527 - 4.551: 93.6341% ( 18) 00:14:11.638 4.551 - 4.575: 93.7525% ( 15) 00:14:11.638 4.575 - 4.599: 93.8708% ( 15) 00:14:11.638 4.599 - 4.622: 93.9733% ( 13) 00:14:11.638 4.622 - 4.646: 94.0996% ( 16) 00:14:11.638 4.646 - 4.670: 94.2258% ( 16) 00:14:11.638 4.670 - 4.693: 94.3993% ( 22) 00:14:11.638 4.693 - 4.717: 94.5255% ( 16) 00:14:11.638 4.717 - 4.741: 94.6754% ( 19) 00:14:11.638 4.741 - 4.764: 94.8332% ( 20) 00:14:11.638 4.764 - 4.788: 94.9830% ( 19) 00:14:11.638 4.788 - 4.812: 95.1881% ( 26) 00:14:11.638 4.812 - 4.836: 95.3617% ( 22) 00:14:11.638 4.836 - 4.859: 95.5431% ( 23) 00:14:11.638 4.859 - 4.883: 95.7403% ( 25) 00:14:11.638 4.883 - 4.907: 95.9533% ( 27) 00:14:11.638 4.907 - 4.930: 96.1111% ( 20) 00:14:11.638 4.930 - 4.954: 96.2215% ( 14) 00:14:11.638 4.954 - 4.978: 96.3319% ( 14) 00:14:11.638 4.978 - 5.001: 96.4345% ( 13) 00:14:11.638 5.001 - 5.025: 96.6317% ( 25) 00:14:11.638 5.025 - 5.049: 96.7737% ( 18) 00:14:11.638 5.049 - 5.073: 96.9315% ( 20) 00:14:11.638 5.073 - 5.096: 97.0419% ( 14) 00:14:11.638 5.096 - 5.120: 97.1365% ( 12) 00:14:11.638 5.120 - 5.144: 97.2075% ( 9) 00:14:11.638 5.144 - 5.167: 97.2706% ( 8) 00:14:11.638 5.167 - 5.191: 97.3259% ( 7) 00:14:11.638 5.191 - 5.215: 97.3574% ( 4) 00:14:11.638 5.215 - 5.239: 97.4047% ( 6) 00:14:11.638 5.239 - 5.262: 97.4521% ( 6) 00:14:11.638 5.262 - 5.286: 97.5388% ( 11) 00:14:11.638 
5.286 - 5.310: 97.6098% ( 9) 00:14:11.638 5.310 - 5.333: 97.6256% ( 2) 00:14:11.638 5.333 - 5.357: 97.6808% ( 7) 00:14:11.638 5.357 - 5.381: 97.7045% ( 3) 00:14:11.638 5.381 - 5.404: 97.7439% ( 5) 00:14:11.638 5.404 - 5.428: 97.7913% ( 6) 00:14:11.638 5.428 - 5.452: 97.8228% ( 4) 00:14:11.638 5.452 - 5.476: 97.8465% ( 3) 00:14:11.638 5.476 - 5.499: 97.8702% ( 3) 00:14:11.638 5.499 - 5.523: 97.8859% ( 2) 00:14:11.638 5.523 - 5.547: 97.9096% ( 3) 00:14:11.638 5.547 - 5.570: 97.9412% ( 4) 00:14:11.638 5.570 - 5.594: 97.9885% ( 6) 00:14:11.638 5.594 - 5.618: 97.9964% ( 1) 00:14:11.638 5.618 - 5.641: 98.0200% ( 3) 00:14:11.638 5.736 - 5.760: 98.0516% ( 4) 00:14:11.638 5.760 - 5.784: 98.0595% ( 1) 00:14:11.638 5.784 - 5.807: 98.0674% ( 1) 00:14:11.638 5.807 - 5.831: 98.0831% ( 2) 00:14:11.638 5.831 - 5.855: 98.0910% ( 1) 00:14:11.638 5.855 - 5.879: 98.1068% ( 2) 00:14:11.638 5.879 - 5.902: 98.1147% ( 1) 00:14:11.638 5.902 - 5.926: 98.1305% ( 2) 00:14:11.638 5.926 - 5.950: 98.1384% ( 1) 00:14:11.638 5.973 - 5.997: 98.1778% ( 5) 00:14:11.638 6.021 - 6.044: 98.1857% ( 1) 00:14:11.638 6.068 - 6.116: 98.2172% ( 4) 00:14:11.638 6.116 - 6.163: 98.2567% ( 5) 00:14:11.638 6.258 - 6.305: 98.2882% ( 4) 00:14:11.638 6.305 - 6.353: 98.2961% ( 1) 00:14:11.638 6.400 - 6.447: 98.3040% ( 1) 00:14:11.638 6.447 - 6.495: 98.3198% ( 2) 00:14:11.638 6.495 - 6.542: 98.3277% ( 1) 00:14:11.638 6.542 - 6.590: 98.3435% ( 2) 00:14:11.638 6.590 - 6.637: 98.3513% ( 1) 00:14:11.638 6.637 - 6.684: 98.3592% ( 1) 00:14:11.638 6.827 - 6.874: 98.3671% ( 1) 00:14:11.638 6.874 - 6.921: 98.3908% ( 3) 00:14:11.638 7.064 - 7.111: 98.3987% ( 1) 00:14:11.638 7.111 - 7.159: 98.4066% ( 1) 00:14:11.638 7.159 - 7.206: 98.4223% ( 2) 00:14:11.638 7.301 - 7.348: 98.4302% ( 1) 00:14:11.638 7.348 - 7.396: 98.4381% ( 1) 00:14:11.638 7.538 - 7.585: 98.4539% ( 2) 00:14:11.638 7.633 - 7.680: 98.4618% ( 1) 00:14:11.638 7.870 - 7.917: 98.4697% ( 1) 00:14:11.638 7.917 - 7.964: 98.4776% ( 1) 00:14:11.638 8.012 - 8.059: 98.4854% ( 1) 00:14:11.638 8.059 - 8.107: 98.4933% ( 1) 00:14:11.638 8.107 - 8.154: 98.5091% ( 2) 00:14:11.638 8.201 - 8.249: 98.5170% ( 1) 00:14:11.638 8.391 - 8.439: 98.5249% ( 1) 00:14:11.638 8.439 - 8.486: 98.5407% ( 2) 00:14:11.638 8.533 - 8.581: 98.5486% ( 1) 00:14:11.638 8.628 - 8.676: 98.5564% ( 1) 00:14:11.638 8.723 - 8.770: 98.5643% ( 1) 00:14:11.638 8.770 - 8.818: 98.5722% ( 1) 00:14:11.638 8.818 - 8.865: 98.5801% ( 1) 00:14:11.638 8.913 - 8.960: 98.5880% ( 1) 00:14:11.638 9.055 - 9.102: 98.5959% ( 1) 00:14:11.638 9.339 - 9.387: 98.6038% ( 1) 00:14:11.638 9.387 - 9.434: 98.6117% ( 1) 00:14:11.638 9.434 - 9.481: 98.6195% ( 1) 00:14:11.638 9.576 - 9.624: 98.6274% ( 1) 00:14:11.638 9.719 - 9.766: 98.6432% ( 2) 00:14:11.638 9.956 - 10.003: 98.6511% ( 1) 00:14:11.638 10.003 - 10.050: 98.6590% ( 1) 00:14:11.638 10.619 - 10.667: 98.6669% ( 1) 00:14:11.638 10.667 - 10.714: 98.6748% ( 1) 00:14:11.638 11.141 - 11.188: 98.6827% ( 1) 00:14:11.638 11.473 - 11.520: 98.6905% ( 1) 00:14:11.638 11.662 - 11.710: 98.6984% ( 1) 00:14:11.638 12.041 - 12.089: 98.7063% ( 1) 00:14:11.638 12.089 - 12.136: 98.7221% ( 2) 00:14:11.638 12.136 - 12.231: 98.7379% ( 2) 00:14:11.638 12.421 - 12.516: 98.7536% ( 2) 00:14:11.638 12.705 - 12.800: 98.7615% ( 1) 00:14:11.638 13.179 - 13.274: 98.7773% ( 2) 00:14:11.638 13.274 - 13.369: 98.7852% ( 1) 00:14:11.638 13.369 - 13.464: 98.7931% ( 1) 00:14:11.638 13.464 - 13.559: 98.8010% ( 1) 00:14:11.638 13.938 - 14.033: 98.8168% ( 2) 00:14:11.638 14.127 - 14.222: 98.8246% ( 1) 00:14:11.638 14.317 - 14.412: 98.8325% ( 1) 
00:14:11.638 14.412 - 14.507: 98.8404% ( 1) 00:14:11.638 14.507 - 14.601: 98.8483% ( 1) 00:14:11.638 14.601 - 14.696: 98.8562% ( 1) 00:14:11.638 14.791 - 14.886: 98.8641% ( 1) 00:14:11.638 15.076 - 15.170: 98.8720% ( 1) 00:14:11.638 15.929 - 16.024: 98.8799% ( 1) 00:14:11.638 16.119 - 16.213: 98.8877% ( 1) 00:14:11.638 16.687 - 16.782: 98.8956% ( 1) 00:14:11.638 17.067 - 17.161: 98.9035% ( 1) 00:14:11.638 17.351 - 17.446: 98.9272% ( 3) 00:14:11.638 17.446 - 17.541: 98.9587% ( 4) 00:14:11.638 17.541 - 17.636: 98.9824% ( 3) 00:14:11.638 17.636 - 17.730: 99.0061% ( 3) 00:14:11.638 17.730 - 17.825: 99.0692% ( 8) 00:14:11.638 17.825 - 17.920: 99.1165% ( 6) 00:14:11.638 17.920 - 18.015: 99.1875% ( 9) 00:14:11.638 18.015 - 18.110: 99.2585% ( 9) 00:14:11.638 18.110 - 18.204: 99.3295% ( 9) 00:14:11.638 18.204 - 18.299: 99.4005% ( 9) 00:14:11.638 18.299 - 18.394: 99.4557% ( 7) 00:14:11.638 18.394 - 18.489: 99.5583% ( 13) 00:14:11.638 18.489 - 18.584: 99.6214% ( 8) 00:14:11.638 18.584 - 18.679: 99.6529% ( 4) 00:14:11.638 18.679 - 18.773: 99.6845% ( 4) 00:14:11.638 18.773 - 18.868: 99.7002% ( 2) 00:14:11.638 18.868 - 18.963: 99.7476% ( 6) 00:14:11.639 18.963 - 19.058: 99.7555% ( 1) 00:14:11.639 19.058 - 19.153: 99.7791% ( 3) 00:14:11.639 19.153 - 19.247: 99.7870% ( 1) 00:14:11.639 19.247 - 19.342: 99.8028% ( 2) 00:14:11.639 19.342 - 19.437: 99.8107% ( 1) 00:14:11.639 19.437 - 19.532: 99.8343% ( 3) 00:14:11.639 19.532 - 19.627: 99.8422% ( 1) 00:14:11.639 19.627 - 19.721: 99.8501% ( 1) 00:14:11.639 19.721 - 19.816: 99.8580% ( 1) 00:14:11.639 19.816 - 19.911: 99.8659% ( 1) 00:14:11.639 19.911 - 20.006: 99.8738% ( 1) 00:14:11.639 21.333 - 21.428: 99.8817% ( 1) 00:14:11.639 21.902 - 21.997: 99.8896% ( 1) 00:14:11.639 23.040 - 23.135: 99.8975% ( 1) 00:14:11.639 23.230 - 23.324: 99.9053% ( 1) 00:14:11.639 3980.705 - 4004.978: 99.9606% ( 7) 00:14:11.639 4004.978 - 4029.250: 100.0000% ( 5) 00:14:11.639 00:14:11.639 Complete histogram 00:14:11.639 ================== 00:14:11.639 Range in us Cumulative Count 00:14:11.639 2.062 - 2.074: 0.1262% ( 16) 00:14:11.639 2.074 - 2.086: 22.2766% ( 2808) 00:14:11.639 2.086 - 2.098: 42.0131% ( 2502) 00:14:11.639 2.098 - 2.110: 43.9694% ( 248) 00:14:11.639 2.110 - 2.121: 52.7175% ( 1109) 00:14:11.639 2.121 - 2.133: 56.1884% ( 440) 00:14:11.639 2.133 - 2.145: 57.9001% ( 217) 00:14:11.639 2.145 - 2.157: 66.9007% ( 1141) 00:14:11.639 2.157 - 2.169: 71.6652% ( 604) 00:14:11.639 2.169 - 2.181: 72.7065% ( 132) 00:14:11.639 2.181 - 2.193: 75.4122% ( 343) 00:14:11.639 2.193 - 2.204: 76.6191% ( 153) 00:14:11.639 2.204 - 2.216: 77.2344% ( 78) 00:14:11.639 2.216 - 2.228: 81.0681% ( 486) 00:14:11.639 2.228 - 2.240: 84.8308% ( 477) 00:14:11.639 2.240 - 2.252: 86.3927% ( 198) 00:14:11.639 2.252 - 2.264: 88.1123% ( 218) 00:14:11.639 2.264 - 2.276: 88.8617% ( 95) 00:14:11.639 2.276 - 2.287: 89.1378% ( 35) 00:14:11.639 2.287 - 2.299: 89.6663% ( 67) 00:14:11.639 2.299 - 2.311: 90.1396% ( 60) 00:14:11.639 2.311 - 2.323: 90.7549% ( 78) 00:14:11.639 2.323 - 2.335: 91.0231% ( 34) 00:14:11.639 2.335 - 2.347: 91.1099% ( 11) 00:14:11.639 2.347 - 2.359: 91.1651% ( 7) 00:14:11.639 2.359 - 2.370: 91.2361% ( 9) 00:14:11.639 2.370 - 2.382: 91.3386% ( 13) 00:14:11.639 2.382 - 2.394: 91.5832% ( 31) 00:14:11.639 2.394 - 2.406: 91.8198% ( 30) 00:14:11.639 2.406 - 2.418: 91.8750% ( 7) 00:14:11.639 2.418 - 2.430: 92.0249% ( 19) 00:14:11.639 2.430 - 2.441: 92.2379% ( 27) 00:14:11.639 2.441 - 2.453: 92.3326% ( 12) 00:14:11.639 2.453 - 2.465: 92.5456% ( 27) 00:14:11.639 2.465 - 2.477: 92.7822% ( 30) 00:14:11.639 
2.477 - 2.489: 92.9715% ( 24) 00:14:11.639 2.489 - 2.501: 93.1687% ( 25) 00:14:11.639 2.501 - 2.513: 93.3817% ( 27) 00:14:11.639 2.513 - 2.524: 93.6105% ( 29) 00:14:11.639 2.524 - 2.536: 93.7998% ( 24) 00:14:11.639 2.536 - 2.548: 93.9418% ( 18) 00:14:11.639 2.548 - 2.560: 94.0838% ( 18) 00:14:11.639 2.560 - 2.572: 94.2337% ( 19) 00:14:11.639 2.572 - 2.584: 94.3756% ( 18) 00:14:11.639 2.584 - 2.596: 94.4703% ( 12) 00:14:11.639 2.596 - 2.607: 94.5965% ( 16) 00:14:11.639 2.607 - 2.619: 94.7385% ( 18) 00:14:11.639 2.619 - 2.631: 94.8095% ( 9) 00:14:11.639 2.631 - 2.643: 94.9120% ( 13) 00:14:11.639 2.643 - 2.655: 94.9909% ( 10) 00:14:11.639 2.655 - 2.667: 95.0540% ( 8) 00:14:11.639 2.667 - 2.679: 95.1802% ( 16) 00:14:11.639 2.679 - 2.690: 95.2434% ( 8) 00:14:11.639 2.690 - 2.702: 95.3853% ( 18) 00:14:11.639 2.702 - 2.714: 95.4642% ( 10) 00:14:11.639 2.714 - 2.726: 95.6141% ( 19) 00:14:11.639 2.726 - 2.738: 95.7561% ( 18) 00:14:11.639 2.738 - 2.750: 95.8902% ( 17) 00:14:11.639 2.750 - 2.761: 95.9770% ( 11) 00:14:11.639 2.761 - 2.773: 96.0953% ( 15) 00:14:11.639 2.773 - 2.785: 96.2136% ( 15) 00:14:11.639 2.785 - 2.797: 96.3162% ( 13) 00:14:11.639 2.797 - 2.809: 96.4029% ( 11) 00:14:11.639 2.809 - 2.821: 96.4582% ( 7) 00:14:11.639 2.821 - 2.833: 96.5449% ( 11) 00:14:11.639 2.833 - 2.844: 96.6396% ( 12) 00:14:11.639 2.844 - 2.856: 96.7185% ( 10) 00:14:11.639 2.856 - 2.868: 96.7737% ( 7) 00:14:11.639 2.868 - 2.880: 96.8289% ( 7) 00:14:11.639 2.880 - 2.892: 96.9393% ( 14) 00:14:11.639 2.892 - 2.904: 97.0103% ( 9) 00:14:11.639 2.904 - 2.916: 97.0577% ( 6) 00:14:11.639 2.916 - 2.927: 97.1365% ( 10) 00:14:11.639 2.927 - 2.939: 97.2233% ( 11) 00:14:11.639 2.939 - 2.951: 97.2706% ( 6) 00:14:11.639 2.951 - 2.963: 97.3180% ( 6) 00:14:11.639 2.963 - 2.975: 97.3495% ( 4) 00:14:11.639 2.975 - 2.987: 97.4047% ( 7) 00:14:11.639 2.987 - 2.999: 97.4363% ( 4) 00:14:11.639 2.999 - 3.010: 97.4915% ( 7) 00:14:11.639 3.010 - 3.022: 97.5231% ( 4) 00:14:11.639 3.022 - 3.034: 97.5546% ( 4) 00:14:11.639 3.034 - 3.058: 97.6177% ( 8) 00:14:11.639 3.058 - 3.081: 97.6887% ( 9) 00:14:11.639 3.081 - 3.105: 97.7361% ( 6) 00:14:11.639 3.105 - 3.129: 97.7913% ( 7) 00:14:11.639 3.129 - 3.153: 97.8386% ( 6) 00:14:11.639 3.153 - 3.176: 97.9175% ( 10) 00:14:11.639 3.200 - 3.224: 97.9964% ( 10) 00:14:11.639 3.224 - 3.247: 98.0358% ( 5) 00:14:11.639 3.247 - 3.271: 98.0595% ( 3) 00:14:11.639 3.271 - 3.295: 98.0989% ( 5) 00:14:11.639 3.295 - 3.319: 98.1068% ( 1) 00:14:11.639 3.319 - 3.342: 98.1462% ( 5) 00:14:11.639 3.342 - 3.366: 98.1778% ( 4) 00:14:11.639 3.366 - 3.390: 98.2015% ( 3) 00:14:11.639 3.413 - 3.437: 98.2172% ( 2) 00:14:11.639 3.437 - 3.461: 98.2251% ( 1) 00:14:11.639 3.461 - 3.484: 98.2488% ( 3) 00:14:11.639 3.484 - 3.508: 98.2567% ( 1) 00:14:11.639 3.508 - 3.532: 98.2646% ( 1) 00:14:11.639 3.532 - 3.556: 98.2882% ( 3) 00:14:11.639 3.556 - 3.579: 98.3040% ( 2) 00:14:11.639 3.579 - 3.603: 98.3435% ( 5) 00:14:11.639 3.603 - 3.627: 98.3513% ( 1) 00:14:11.639 3.627 - 3.650: 98.3829% ( 4) 00:14:11.639 3.650 - 3.674: 98.3908% ( 1) 00:14:11.639 3.674 - 3.698: 98.3987% ( 1) 00:14:11.639 3.698 - 3.721: 98.4145% ( 2) 00:14:11.639 3.745 - 3.769: 98.4223% ( 1) 00:14:11.639 3.769 - 3.793: 98.4302% ( 1) 00:14:11.639 3.793 - 3.816: 98.4460% ( 2) 00:14:11.639 3.816 - 3.840: 98.4539% ( 1) 00:14:11.639 3.840 - 3.864: 98.4854% ( 4) 00:14:11.639 3.911 - 3.935: 98.4933% ( 1) 00:14:11.639 3.935 - 3.959: 98.5170% ( 3) 00:14:11.639 3.959 - 3.982: 98.5328% ( 2) 00:14:11.639 4.006 - 4.030: 98.5407% ( 1) 00:14:11.639 4.030 - 4.053: 98.5486% ( 1) 
00:14:11.639 4.077 - 4.101: 98.5643% ( 2) 00:14:11.639 4.101 - 4.124: 98.5722% ( 1) 00:14:11.639 4.172 - 4.196: 98.5801% ( 1) 00:14:11.639 4.290 - 4.314: 98.5959% ( 2) 00:14:11.639 4.504 - 4.527: 98.6038% ( 1) 00:14:11.639 4.717 - 4.741: 98.6117% ( 1) 00:14:11.639 5.262 - 5.286: 98.6195% ( 1) 00:14:11.639 5.404 - 5.428: 98.6274% ( 1) 00:14:11.639 5.594 - 5.618: 98.6353% ( 1) 00:14:11.639 5.641 - 5.665: 98.6432% ( 1) 00:14:11.639 6.258 - 6.305: 98.6590% ( 2) 00:14:11.639 6.495 - 6.542: 98.6827% ( 3) 00:14:11.639 6.684 - 6.732: 98.6984% ( 2) 00:14:11.639 6.921 - 6.969: 98.7142% ( 2) 00:14:11.639 7.111 - 7.159: 98.7221% ( 1) 00:14:11.639 7.206 - 7.253: 98.7300% ( 1) 00:14:11.639 7.396 - 7.443: 98.7379%[2024-11-15 10:33:00.077506] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:11.897 ( 1) 00:14:11.897 7.490 - 7.538: 98.7458% ( 1) 00:14:11.897 8.201 - 8.249: 98.7615% ( 2) 00:14:11.897 8.865 - 8.913: 98.7694% ( 1) 00:14:11.897 10.050 - 10.098: 98.7773% ( 1) 00:14:11.897 10.287 - 10.335: 98.7852% ( 1) 00:14:11.897 11.947 - 11.994: 98.7931% ( 1) 00:14:11.897 12.516 - 12.610: 98.8010% ( 1) 00:14:11.897 13.274 - 13.369: 98.8089% ( 1) 00:14:11.897 13.464 - 13.559: 98.8168% ( 1) 00:14:11.897 15.170 - 15.265: 98.8246% ( 1) 00:14:11.897 15.265 - 15.360: 98.8325% ( 1) 00:14:11.897 15.455 - 15.550: 98.8404% ( 1) 00:14:11.897 15.644 - 15.739: 98.8562% ( 2) 00:14:11.897 15.739 - 15.834: 98.8720% ( 2) 00:14:11.897 15.929 - 16.024: 98.9193% ( 6) 00:14:11.897 16.024 - 16.119: 98.9587% ( 5) 00:14:11.897 16.119 - 16.213: 98.9982% ( 5) 00:14:11.897 16.213 - 16.308: 99.0140% ( 2) 00:14:11.897 16.308 - 16.403: 99.0219% ( 1) 00:14:11.897 16.403 - 16.498: 99.0534% ( 4) 00:14:11.897 16.498 - 16.593: 99.0928% ( 5) 00:14:11.897 16.593 - 16.687: 99.1165% ( 3) 00:14:11.897 16.687 - 16.782: 99.1954% ( 10) 00:14:11.897 16.782 - 16.877: 99.2269% ( 4) 00:14:11.897 16.877 - 16.972: 99.2664% ( 5) 00:14:11.897 16.972 - 17.067: 99.2743% ( 1) 00:14:11.897 17.067 - 17.161: 99.2822% ( 1) 00:14:11.897 17.256 - 17.351: 99.3216% ( 5) 00:14:11.897 17.351 - 17.446: 99.3295% ( 1) 00:14:11.897 17.541 - 17.636: 99.3374% ( 1) 00:14:11.897 17.825 - 17.920: 99.3453% ( 1) 00:14:11.897 17.920 - 18.015: 99.3532% ( 1) 00:14:11.897 18.204 - 18.299: 99.3610% ( 1) 00:14:11.897 18.299 - 18.394: 99.3689% ( 1) 00:14:11.897 18.394 - 18.489: 99.3768% ( 1) 00:14:11.897 18.773 - 18.868: 99.3847% ( 1) 00:14:11.897 21.713 - 21.807: 99.3926% ( 1) 00:14:11.897 24.178 - 24.273: 99.4005% ( 1) 00:14:11.897 3980.705 - 4004.978: 99.8659% ( 59) 00:14:11.897 4004.978 - 4029.250: 99.9921% ( 16) 00:14:11.897 4029.250 - 4053.523: 100.0000% ( 1) 00:14:11.897 00:14:11.897 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:11.897 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:11.897 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:11.897 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:11.897 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:12.155 [ 00:14:12.155 { 00:14:12.155 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:14:12.155 "subtype": "Discovery", 00:14:12.155 "listen_addresses": [], 00:14:12.155 "allow_any_host": true, 00:14:12.155 "hosts": [] 00:14:12.155 }, 00:14:12.155 { 00:14:12.155 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:12.155 "subtype": "NVMe", 00:14:12.155 "listen_addresses": [ 00:14:12.155 { 00:14:12.155 "trtype": "VFIOUSER", 00:14:12.155 "adrfam": "IPv4", 00:14:12.155 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:12.155 "trsvcid": "0" 00:14:12.155 } 00:14:12.155 ], 00:14:12.155 "allow_any_host": true, 00:14:12.155 "hosts": [], 00:14:12.155 "serial_number": "SPDK1", 00:14:12.155 "model_number": "SPDK bdev Controller", 00:14:12.155 "max_namespaces": 32, 00:14:12.155 "min_cntlid": 1, 00:14:12.155 "max_cntlid": 65519, 00:14:12.155 "namespaces": [ 00:14:12.155 { 00:14:12.155 "nsid": 1, 00:14:12.155 "bdev_name": "Malloc1", 00:14:12.155 "name": "Malloc1", 00:14:12.155 "nguid": "368A6F54642E4676AD8A717B60DFF501", 00:14:12.155 "uuid": "368a6f54-642e-4676-ad8a-717b60dff501" 00:14:12.155 } 00:14:12.155 ] 00:14:12.155 }, 00:14:12.155 { 00:14:12.155 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:12.155 "subtype": "NVMe", 00:14:12.155 "listen_addresses": [ 00:14:12.155 { 00:14:12.155 "trtype": "VFIOUSER", 00:14:12.155 "adrfam": "IPv4", 00:14:12.155 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:12.155 "trsvcid": "0" 00:14:12.155 } 00:14:12.155 ], 00:14:12.155 "allow_any_host": true, 00:14:12.155 "hosts": [], 00:14:12.155 "serial_number": "SPDK2", 00:14:12.155 "model_number": "SPDK bdev Controller", 00:14:12.155 "max_namespaces": 32, 00:14:12.155 "min_cntlid": 1, 00:14:12.155 "max_cntlid": 65519, 00:14:12.155 "namespaces": [ 00:14:12.155 { 00:14:12.155 "nsid": 1, 00:14:12.155 "bdev_name": "Malloc2", 00:14:12.155 "name": "Malloc2", 00:14:12.155 "nguid": "701ECD518F0849F09965B0911C17A856", 00:14:12.155 "uuid": "701ecd51-8f08-49f0-9965-b0911c17a856" 00:14:12.155 } 00:14:12.155 ] 00:14:12.155 } 00:14:12.155 ] 00:14:12.155 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:12.156 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=352783 00:14:12.156 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:12.156 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:12.156 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:14:12.156 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:12.156 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:14:12.156 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=1 00:14:12.156 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:14:12.156 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:12.156 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:14:12.156 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=2 00:14:12.156 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:14:12.156 [2024-11-15 10:33:00.593887] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:12.413 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:12.413 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:12.413 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:14:12.413 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:12.413 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:12.670 Malloc3 00:14:12.670 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:12.927 [2024-11-15 10:33:01.190349] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:12.927 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:12.927 Asynchronous Event Request test 00:14:12.927 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:12.927 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:12.927 Registering asynchronous event callbacks... 00:14:12.927 Starting namespace attribute notice tests for all controllers... 00:14:12.927 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:12.927 aer_cb - Changed Namespace 00:14:12.927 Cleaning up... 
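The aer_cb "Changed Namespace" notice above is the expected outcome of the steps the test just performed: a new malloc bdev is created and attached to cnode1 as a second namespace, and the target raises a namespace-attribute-changed AEN that the aer tool is waiting on. The two RPCs involved, copied from the trace above with the workspace prefix dropped for readability:
# Create a 64 MB, 512-byte-block malloc bdev and expose it as namespace 2 of cnode1
# (this is what triggers the AEN logged above).
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2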
00:14:13.185 [ 00:14:13.186 { 00:14:13.186 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:13.186 "subtype": "Discovery", 00:14:13.186 "listen_addresses": [], 00:14:13.186 "allow_any_host": true, 00:14:13.186 "hosts": [] 00:14:13.186 }, 00:14:13.186 { 00:14:13.186 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:13.186 "subtype": "NVMe", 00:14:13.186 "listen_addresses": [ 00:14:13.186 { 00:14:13.186 "trtype": "VFIOUSER", 00:14:13.186 "adrfam": "IPv4", 00:14:13.186 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:13.186 "trsvcid": "0" 00:14:13.186 } 00:14:13.186 ], 00:14:13.186 "allow_any_host": true, 00:14:13.186 "hosts": [], 00:14:13.186 "serial_number": "SPDK1", 00:14:13.186 "model_number": "SPDK bdev Controller", 00:14:13.186 "max_namespaces": 32, 00:14:13.186 "min_cntlid": 1, 00:14:13.186 "max_cntlid": 65519, 00:14:13.186 "namespaces": [ 00:14:13.186 { 00:14:13.186 "nsid": 1, 00:14:13.186 "bdev_name": "Malloc1", 00:14:13.186 "name": "Malloc1", 00:14:13.186 "nguid": "368A6F54642E4676AD8A717B60DFF501", 00:14:13.186 "uuid": "368a6f54-642e-4676-ad8a-717b60dff501" 00:14:13.186 }, 00:14:13.186 { 00:14:13.186 "nsid": 2, 00:14:13.186 "bdev_name": "Malloc3", 00:14:13.186 "name": "Malloc3", 00:14:13.186 "nguid": "76FE01958F4743BF8183086A94F4EAB7", 00:14:13.186 "uuid": "76fe0195-8f47-43bf-8183-086a94f4eab7" 00:14:13.186 } 00:14:13.186 ] 00:14:13.186 }, 00:14:13.186 { 00:14:13.186 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:13.186 "subtype": "NVMe", 00:14:13.186 "listen_addresses": [ 00:14:13.186 { 00:14:13.186 "trtype": "VFIOUSER", 00:14:13.186 "adrfam": "IPv4", 00:14:13.186 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:13.186 "trsvcid": "0" 00:14:13.186 } 00:14:13.186 ], 00:14:13.186 "allow_any_host": true, 00:14:13.186 "hosts": [], 00:14:13.186 "serial_number": "SPDK2", 00:14:13.186 "model_number": "SPDK bdev Controller", 00:14:13.186 "max_namespaces": 32, 00:14:13.186 "min_cntlid": 1, 00:14:13.186 "max_cntlid": 65519, 00:14:13.186 "namespaces": [ 00:14:13.186 { 00:14:13.186 "nsid": 1, 00:14:13.186 "bdev_name": "Malloc2", 00:14:13.186 "name": "Malloc2", 00:14:13.186 "nguid": "701ECD518F0849F09965B0911C17A856", 00:14:13.186 "uuid": "701ecd51-8f08-49f0-9965-b0911c17a856" 00:14:13.186 } 00:14:13.186 ] 00:14:13.186 } 00:14:13.186 ] 00:14:13.186 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 352783 00:14:13.186 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:13.186 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:13.186 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:13.186 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:13.186 [2024-11-15 10:33:01.521907] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
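The second nvmf_get_subsystems dump above now lists both Malloc1 (nsid 1) and Malloc3 (nsid 2) under nqn.2019-07.io.spdk:cnode1, confirming the hot-added namespace. One possible way to extract just those names, assuming jq is available (not used by the test itself):
# Print the namespace names under cnode1 from the RPC output shown above.
scripts/rpc.py nvmf_get_subsystems \
  | jq -r '.[] | select(.nqn == "nqn.2019-07.io.spdk:cnode1") | .namespaces[].name'
# -> Malloc1
#    Malloc3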
00:14:13.186 [2024-11-15 10:33:01.521951] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352979 ] 00:14:13.186 [2024-11-15 10:33:01.572281] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:13.186 [2024-11-15 10:33:01.577639] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:13.186 [2024-11-15 10:33:01.577680] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd0a74aa000 00:14:13.186 [2024-11-15 10:33:01.578638] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.186 [2024-11-15 10:33:01.579644] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.186 [2024-11-15 10:33:01.582373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.186 [2024-11-15 10:33:01.582658] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:13.186 [2024-11-15 10:33:01.583681] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:13.186 [2024-11-15 10:33:01.584688] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.186 [2024-11-15 10:33:01.585681] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:13.186 [2024-11-15 10:33:01.586691] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.186 [2024-11-15 10:33:01.587714] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:13.186 [2024-11-15 10:33:01.587736] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd0a749f000 00:14:13.186 [2024-11-15 10:33:01.588869] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:13.186 [2024-11-15 10:33:01.606635] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:13.186 [2024-11-15 10:33:01.606691] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:13.186 [2024-11-15 10:33:01.608781] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:13.186 [2024-11-15 10:33:01.608835] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:13.186 [2024-11-15 10:33:01.608923] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:13.186 
[2024-11-15 10:33:01.608947] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:13.186 [2024-11-15 10:33:01.608958] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:13.186 [2024-11-15 10:33:01.609793] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:13.186 [2024-11-15 10:33:01.609816] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:13.186 [2024-11-15 10:33:01.609829] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:13.186 [2024-11-15 10:33:01.610792] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:13.186 [2024-11-15 10:33:01.610814] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:13.186 [2024-11-15 10:33:01.610833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:13.186 [2024-11-15 10:33:01.611804] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:13.186 [2024-11-15 10:33:01.611825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:13.186 [2024-11-15 10:33:01.612807] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:13.186 [2024-11-15 10:33:01.612827] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:13.186 [2024-11-15 10:33:01.612836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:13.186 [2024-11-15 10:33:01.612848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:13.186 [2024-11-15 10:33:01.612957] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:13.186 [2024-11-15 10:33:01.612965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:13.186 [2024-11-15 10:33:01.612973] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:13.186 [2024-11-15 10:33:01.613816] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:13.186 [2024-11-15 10:33:01.614816] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:13.186 [2024-11-15 10:33:01.615821] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:13.186 [2024-11-15 10:33:01.616812] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:13.186 [2024-11-15 10:33:01.616897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:13.186 [2024-11-15 10:33:01.617836] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:13.186 [2024-11-15 10:33:01.617857] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:13.186 [2024-11-15 10:33:01.617867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:13.186 [2024-11-15 10:33:01.617892] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:13.186 [2024-11-15 10:33:01.617911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:13.186 [2024-11-15 10:33:01.617935] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:13.186 [2024-11-15 10:33:01.617945] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.187 [2024-11-15 10:33:01.617952] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.187 [2024-11-15 10:33:01.617970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.187 [2024-11-15 10:33:01.624382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:13.187 [2024-11-15 10:33:01.624422] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:13.187 [2024-11-15 10:33:01.624433] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:13.187 [2024-11-15 10:33:01.624440] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:13.187 [2024-11-15 10:33:01.624449] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:13.187 [2024-11-15 10:33:01.624461] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:13.187 [2024-11-15 10:33:01.624471] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:13.187 [2024-11-15 10:33:01.624480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:13.187 [2024-11-15 10:33:01.624496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:13.187 [2024-11-15 
10:33:01.624513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:13.187 [2024-11-15 10:33:01.632377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:13.187 [2024-11-15 10:33:01.632413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.187 [2024-11-15 10:33:01.632427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.187 [2024-11-15 10:33:01.632439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.187 [2024-11-15 10:33:01.632451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.187 [2024-11-15 10:33:01.632460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:13.187 [2024-11-15 10:33:01.632473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:13.187 [2024-11-15 10:33:01.632486] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:13.187 [2024-11-15 10:33:01.640376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:13.187 [2024-11-15 10:33:01.640400] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:13.187 [2024-11-15 10:33:01.640411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:13.187 [2024-11-15 10:33:01.640424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:13.187 [2024-11-15 10:33:01.640434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:13.187 [2024-11-15 10:33:01.640448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:13.187 [2024-11-15 10:33:01.648377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:13.187 [2024-11-15 10:33:01.648467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:13.187 [2024-11-15 10:33:01.648486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:13.187 [2024-11-15 10:33:01.648500] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:13.187 [2024-11-15 10:33:01.648508] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:14:13.187 [2024-11-15 10:33:01.648514] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.187 [2024-11-15 10:33:01.648524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:13.446 [2024-11-15 10:33:01.656387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:13.446 [2024-11-15 10:33:01.656417] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:13.446 [2024-11-15 10:33:01.656442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:13.446 [2024-11-15 10:33:01.656459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:13.446 [2024-11-15 10:33:01.656473] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:13.446 [2024-11-15 10:33:01.656483] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.446 [2024-11-15 10:33:01.656489] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.446 [2024-11-15 10:33:01.656499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.446 [2024-11-15 10:33:01.664376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:13.446 [2024-11-15 10:33:01.664406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:13.446 [2024-11-15 10:33:01.664423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:13.446 [2024-11-15 10:33:01.664436] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:13.446 [2024-11-15 10:33:01.664445] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.446 [2024-11-15 10:33:01.664451] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.446 [2024-11-15 10:33:01.664460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.446 [2024-11-15 10:33:01.672376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:13.446 [2024-11-15 10:33:01.672398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:13.446 [2024-11-15 10:33:01.672410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:13.446 [2024-11-15 10:33:01.672425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:14:13.446 [2024-11-15 10:33:01.672436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:13.446 [2024-11-15 10:33:01.672447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:13.446 [2024-11-15 10:33:01.672456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:13.446 [2024-11-15 10:33:01.672466] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:13.446 [2024-11-15 10:33:01.672473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:13.446 [2024-11-15 10:33:01.672482] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:13.446 [2024-11-15 10:33:01.672507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:13.446 [2024-11-15 10:33:01.680376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:13.446 [2024-11-15 10:33:01.680403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:13.446 [2024-11-15 10:33:01.688392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:13.446 [2024-11-15 10:33:01.688416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:13.446 [2024-11-15 10:33:01.695376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:13.446 [2024-11-15 10:33:01.695413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:13.446 [2024-11-15 10:33:01.704391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:13.446 [2024-11-15 10:33:01.704425] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:13.446 [2024-11-15 10:33:01.704436] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:13.446 [2024-11-15 10:33:01.704442] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:13.446 [2024-11-15 10:33:01.704448] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:13.446 [2024-11-15 10:33:01.704454] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:13.446 [2024-11-15 10:33:01.704464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:13.447 [2024-11-15 10:33:01.704476] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:13.447 
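The register offsets in this identify trace follow the standard NVMe BAR0 layout: 0x0 is CAP, 0x8 is VS, 0x14 is CC, 0x1c is CSTS, 0x24 is AQA and 0x28/0x30 are ASQ/ACQ, so the "offset 0x8, value 0x10300" read earlier decodes to NVMe version 1.3.0, matching the "NVMe Specification Version (VS): 1.3" line in the identify summary below. A quick decode of that value (illustrative only):
# Split the VS register value 0x10300 (read at offset 0x8 earlier in this trace) into MJR.MNR.TER.
vs=$((0x10300))
printf 'NVMe %d.%d.%d\n' $(( (vs >> 16) & 0xffff )) $(( (vs >> 8) & 0xff )) $(( vs & 0xff ))
# -> NVMe 1.3.0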
[2024-11-15 10:33:01.704484] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:13.447 [2024-11-15 10:33:01.704490] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.447 [2024-11-15 10:33:01.704499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:13.447 [2024-11-15 10:33:01.704510] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:13.447 [2024-11-15 10:33:01.704518] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.447 [2024-11-15 10:33:01.704523] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.447 [2024-11-15 10:33:01.704532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.447 [2024-11-15 10:33:01.704544] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:13.447 [2024-11-15 10:33:01.704556] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:13.447 [2024-11-15 10:33:01.704563] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.447 [2024-11-15 10:33:01.704571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:13.447 [2024-11-15 10:33:01.712377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:13.447 [2024-11-15 10:33:01.712407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:13.447 [2024-11-15 10:33:01.712426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:13.447 [2024-11-15 10:33:01.712438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:13.447 ===================================================== 00:14:13.447 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:13.447 ===================================================== 00:14:13.447 Controller Capabilities/Features 00:14:13.447 ================================ 00:14:13.447 Vendor ID: 4e58 00:14:13.447 Subsystem Vendor ID: 4e58 00:14:13.447 Serial Number: SPDK2 00:14:13.447 Model Number: SPDK bdev Controller 00:14:13.447 Firmware Version: 25.01 00:14:13.447 Recommended Arb Burst: 6 00:14:13.447 IEEE OUI Identifier: 8d 6b 50 00:14:13.447 Multi-path I/O 00:14:13.447 May have multiple subsystem ports: Yes 00:14:13.447 May have multiple controllers: Yes 00:14:13.447 Associated with SR-IOV VF: No 00:14:13.447 Max Data Transfer Size: 131072 00:14:13.447 Max Number of Namespaces: 32 00:14:13.447 Max Number of I/O Queues: 127 00:14:13.447 NVMe Specification Version (VS): 1.3 00:14:13.447 NVMe Specification Version (Identify): 1.3 00:14:13.447 Maximum Queue Entries: 256 00:14:13.447 Contiguous Queues Required: Yes 00:14:13.447 Arbitration Mechanisms Supported 00:14:13.447 Weighted Round Robin: Not Supported 00:14:13.447 Vendor Specific: Not 
Supported 00:14:13.447 Reset Timeout: 15000 ms 00:14:13.447 Doorbell Stride: 4 bytes 00:14:13.447 NVM Subsystem Reset: Not Supported 00:14:13.447 Command Sets Supported 00:14:13.447 NVM Command Set: Supported 00:14:13.447 Boot Partition: Not Supported 00:14:13.447 Memory Page Size Minimum: 4096 bytes 00:14:13.447 Memory Page Size Maximum: 4096 bytes 00:14:13.447 Persistent Memory Region: Not Supported 00:14:13.447 Optional Asynchronous Events Supported 00:14:13.447 Namespace Attribute Notices: Supported 00:14:13.447 Firmware Activation Notices: Not Supported 00:14:13.447 ANA Change Notices: Not Supported 00:14:13.447 PLE Aggregate Log Change Notices: Not Supported 00:14:13.447 LBA Status Info Alert Notices: Not Supported 00:14:13.447 EGE Aggregate Log Change Notices: Not Supported 00:14:13.447 Normal NVM Subsystem Shutdown event: Not Supported 00:14:13.447 Zone Descriptor Change Notices: Not Supported 00:14:13.447 Discovery Log Change Notices: Not Supported 00:14:13.447 Controller Attributes 00:14:13.447 128-bit Host Identifier: Supported 00:14:13.447 Non-Operational Permissive Mode: Not Supported 00:14:13.447 NVM Sets: Not Supported 00:14:13.447 Read Recovery Levels: Not Supported 00:14:13.447 Endurance Groups: Not Supported 00:14:13.447 Predictable Latency Mode: Not Supported 00:14:13.447 Traffic Based Keep ALive: Not Supported 00:14:13.447 Namespace Granularity: Not Supported 00:14:13.447 SQ Associations: Not Supported 00:14:13.447 UUID List: Not Supported 00:14:13.447 Multi-Domain Subsystem: Not Supported 00:14:13.447 Fixed Capacity Management: Not Supported 00:14:13.447 Variable Capacity Management: Not Supported 00:14:13.447 Delete Endurance Group: Not Supported 00:14:13.447 Delete NVM Set: Not Supported 00:14:13.447 Extended LBA Formats Supported: Not Supported 00:14:13.447 Flexible Data Placement Supported: Not Supported 00:14:13.447 00:14:13.447 Controller Memory Buffer Support 00:14:13.447 ================================ 00:14:13.447 Supported: No 00:14:13.447 00:14:13.447 Persistent Memory Region Support 00:14:13.447 ================================ 00:14:13.447 Supported: No 00:14:13.447 00:14:13.447 Admin Command Set Attributes 00:14:13.447 ============================ 00:14:13.447 Security Send/Receive: Not Supported 00:14:13.447 Format NVM: Not Supported 00:14:13.447 Firmware Activate/Download: Not Supported 00:14:13.447 Namespace Management: Not Supported 00:14:13.447 Device Self-Test: Not Supported 00:14:13.447 Directives: Not Supported 00:14:13.447 NVMe-MI: Not Supported 00:14:13.447 Virtualization Management: Not Supported 00:14:13.447 Doorbell Buffer Config: Not Supported 00:14:13.447 Get LBA Status Capability: Not Supported 00:14:13.447 Command & Feature Lockdown Capability: Not Supported 00:14:13.447 Abort Command Limit: 4 00:14:13.447 Async Event Request Limit: 4 00:14:13.447 Number of Firmware Slots: N/A 00:14:13.447 Firmware Slot 1 Read-Only: N/A 00:14:13.447 Firmware Activation Without Reset: N/A 00:14:13.447 Multiple Update Detection Support: N/A 00:14:13.447 Firmware Update Granularity: No Information Provided 00:14:13.447 Per-Namespace SMART Log: No 00:14:13.447 Asymmetric Namespace Access Log Page: Not Supported 00:14:13.447 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:13.447 Command Effects Log Page: Supported 00:14:13.447 Get Log Page Extended Data: Supported 00:14:13.447 Telemetry Log Pages: Not Supported 00:14:13.447 Persistent Event Log Pages: Not Supported 00:14:13.447 Supported Log Pages Log Page: May Support 00:14:13.447 Commands Supported & 
Effects Log Page: Not Supported 00:14:13.447 Feature Identifiers & Effects Log Page:May Support 00:14:13.447 NVMe-MI Commands & Effects Log Page: May Support 00:14:13.447 Data Area 4 for Telemetry Log: Not Supported 00:14:13.447 Error Log Page Entries Supported: 128 00:14:13.447 Keep Alive: Supported 00:14:13.447 Keep Alive Granularity: 10000 ms 00:14:13.447 00:14:13.447 NVM Command Set Attributes 00:14:13.447 ========================== 00:14:13.447 Submission Queue Entry Size 00:14:13.447 Max: 64 00:14:13.447 Min: 64 00:14:13.447 Completion Queue Entry Size 00:14:13.447 Max: 16 00:14:13.447 Min: 16 00:14:13.447 Number of Namespaces: 32 00:14:13.447 Compare Command: Supported 00:14:13.447 Write Uncorrectable Command: Not Supported 00:14:13.447 Dataset Management Command: Supported 00:14:13.447 Write Zeroes Command: Supported 00:14:13.447 Set Features Save Field: Not Supported 00:14:13.448 Reservations: Not Supported 00:14:13.448 Timestamp: Not Supported 00:14:13.448 Copy: Supported 00:14:13.448 Volatile Write Cache: Present 00:14:13.448 Atomic Write Unit (Normal): 1 00:14:13.448 Atomic Write Unit (PFail): 1 00:14:13.448 Atomic Compare & Write Unit: 1 00:14:13.448 Fused Compare & Write: Supported 00:14:13.448 Scatter-Gather List 00:14:13.448 SGL Command Set: Supported (Dword aligned) 00:14:13.448 SGL Keyed: Not Supported 00:14:13.448 SGL Bit Bucket Descriptor: Not Supported 00:14:13.448 SGL Metadata Pointer: Not Supported 00:14:13.448 Oversized SGL: Not Supported 00:14:13.448 SGL Metadata Address: Not Supported 00:14:13.448 SGL Offset: Not Supported 00:14:13.448 Transport SGL Data Block: Not Supported 00:14:13.448 Replay Protected Memory Block: Not Supported 00:14:13.448 00:14:13.448 Firmware Slot Information 00:14:13.448 ========================= 00:14:13.448 Active slot: 1 00:14:13.448 Slot 1 Firmware Revision: 25.01 00:14:13.448 00:14:13.448 00:14:13.448 Commands Supported and Effects 00:14:13.448 ============================== 00:14:13.448 Admin Commands 00:14:13.448 -------------- 00:14:13.448 Get Log Page (02h): Supported 00:14:13.448 Identify (06h): Supported 00:14:13.448 Abort (08h): Supported 00:14:13.448 Set Features (09h): Supported 00:14:13.448 Get Features (0Ah): Supported 00:14:13.448 Asynchronous Event Request (0Ch): Supported 00:14:13.448 Keep Alive (18h): Supported 00:14:13.448 I/O Commands 00:14:13.448 ------------ 00:14:13.448 Flush (00h): Supported LBA-Change 00:14:13.448 Write (01h): Supported LBA-Change 00:14:13.448 Read (02h): Supported 00:14:13.448 Compare (05h): Supported 00:14:13.448 Write Zeroes (08h): Supported LBA-Change 00:14:13.448 Dataset Management (09h): Supported LBA-Change 00:14:13.448 Copy (19h): Supported LBA-Change 00:14:13.448 00:14:13.448 Error Log 00:14:13.448 ========= 00:14:13.448 00:14:13.448 Arbitration 00:14:13.448 =========== 00:14:13.448 Arbitration Burst: 1 00:14:13.448 00:14:13.448 Power Management 00:14:13.448 ================ 00:14:13.448 Number of Power States: 1 00:14:13.448 Current Power State: Power State #0 00:14:13.448 Power State #0: 00:14:13.448 Max Power: 0.00 W 00:14:13.448 Non-Operational State: Operational 00:14:13.448 Entry Latency: Not Reported 00:14:13.448 Exit Latency: Not Reported 00:14:13.448 Relative Read Throughput: 0 00:14:13.448 Relative Read Latency: 0 00:14:13.448 Relative Write Throughput: 0 00:14:13.448 Relative Write Latency: 0 00:14:13.448 Idle Power: Not Reported 00:14:13.448 Active Power: Not Reported 00:14:13.448 Non-Operational Permissive Mode: Not Supported 00:14:13.448 00:14:13.448 Health Information 
00:14:13.448 ================== 00:14:13.448 Critical Warnings: 00:14:13.448 Available Spare Space: OK 00:14:13.448 Temperature: OK 00:14:13.448 Device Reliability: OK 00:14:13.448 Read Only: No 00:14:13.448 Volatile Memory Backup: OK 00:14:13.448 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:13.448 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:13.448 Available Spare: 0% 00:14:13.448 Available Sp[2024-11-15 10:33:01.712567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:13.448 [2024-11-15 10:33:01.720374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:13.448 [2024-11-15 10:33:01.720428] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:13.448 [2024-11-15 10:33:01.720446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.448 [2024-11-15 10:33:01.720457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.448 [2024-11-15 10:33:01.720466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.448 [2024-11-15 10:33:01.720476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.448 [2024-11-15 10:33:01.720541] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:13.448 [2024-11-15 10:33:01.720563] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:13.448 [2024-11-15 10:33:01.721548] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:13.448 [2024-11-15 10:33:01.721640] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:13.448 [2024-11-15 10:33:01.721670] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:13.448 [2024-11-15 10:33:01.722552] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:13.448 [2024-11-15 10:33:01.722576] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:13.448 [2024-11-15 10:33:01.722629] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:13.448 [2024-11-15 10:33:01.725375] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:13.448 are Threshold: 0% 00:14:13.448 Life Percentage Used: 0% 00:14:13.448 Data Units Read: 0 00:14:13.448 Data Units Written: 0 00:14:13.448 Host Read Commands: 0 00:14:13.448 Host Write Commands: 0 00:14:13.448 Controller Busy Time: 0 minutes 00:14:13.448 Power Cycles: 0 00:14:13.448 Power On Hours: 0 hours 00:14:13.448 Unsafe Shutdowns: 0 00:14:13.448 Unrecoverable Media Errors: 0 00:14:13.448 Lifetime Error Log Entries: 0 00:14:13.448 Warning Temperature 
Time: 0 minutes 00:14:13.448 Critical Temperature Time: 0 minutes 00:14:13.448 00:14:13.448 Number of Queues 00:14:13.448 ================ 00:14:13.448 Number of I/O Submission Queues: 127 00:14:13.448 Number of I/O Completion Queues: 127 00:14:13.448 00:14:13.448 Active Namespaces 00:14:13.448 ================= 00:14:13.448 Namespace ID:1 00:14:13.448 Error Recovery Timeout: Unlimited 00:14:13.448 Command Set Identifier: NVM (00h) 00:14:13.448 Deallocate: Supported 00:14:13.448 Deallocated/Unwritten Error: Not Supported 00:14:13.448 Deallocated Read Value: Unknown 00:14:13.448 Deallocate in Write Zeroes: Not Supported 00:14:13.448 Deallocated Guard Field: 0xFFFF 00:14:13.448 Flush: Supported 00:14:13.448 Reservation: Supported 00:14:13.448 Namespace Sharing Capabilities: Multiple Controllers 00:14:13.448 Size (in LBAs): 131072 (0GiB) 00:14:13.448 Capacity (in LBAs): 131072 (0GiB) 00:14:13.448 Utilization (in LBAs): 131072 (0GiB) 00:14:13.448 NGUID: 701ECD518F0849F09965B0911C17A856 00:14:13.448 UUID: 701ecd51-8f08-49f0-9965-b0911c17a856 00:14:13.448 Thin Provisioning: Not Supported 00:14:13.448 Per-NS Atomic Units: Yes 00:14:13.448 Atomic Boundary Size (Normal): 0 00:14:13.448 Atomic Boundary Size (PFail): 0 00:14:13.448 Atomic Boundary Offset: 0 00:14:13.448 Maximum Single Source Range Length: 65535 00:14:13.448 Maximum Copy Length: 65535 00:14:13.448 Maximum Source Range Count: 1 00:14:13.448 NGUID/EUI64 Never Reused: No 00:14:13.448 Namespace Write Protected: No 00:14:13.448 Number of LBA Formats: 1 00:14:13.448 Current LBA Format: LBA Format #00 00:14:13.448 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:13.448 00:14:13.448 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:13.706 [2024-11-15 10:33:01.969255] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:18.966 Initializing NVMe Controllers 00:14:18.966 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:18.966 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:18.966 Initialization complete. Launching workers. 
00:14:18.966 ======================================================== 00:14:18.966 Latency(us) 00:14:18.966 Device Information : IOPS MiB/s Average min max 00:14:18.966 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33485.91 130.80 3821.77 1178.12 7670.09 00:14:18.966 ======================================================== 00:14:18.966 Total : 33485.91 130.80 3821.77 1178.12 7670.09 00:14:18.966 00:14:18.966 [2024-11-15 10:33:07.081786] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:18.966 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:18.966 [2024-11-15 10:33:07.343461] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:24.226 Initializing NVMe Controllers 00:14:24.226 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:24.226 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:24.226 Initialization complete. Launching workers. 00:14:24.226 ======================================================== 00:14:24.226 Latency(us) 00:14:24.226 Device Information : IOPS MiB/s Average min max 00:14:24.227 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30161.85 117.82 4243.09 1217.36 10190.34 00:14:24.227 ======================================================== 00:14:24.227 Total : 30161.85 117.82 4243.09 1217.36 10190.34 00:14:24.227 00:14:24.227 [2024-11-15 10:33:12.369396] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:24.227 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:24.227 [2024-11-15 10:33:12.599060] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:29.488 [2024-11-15 10:33:17.745510] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:29.488 Initializing NVMe Controllers 00:14:29.488 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:29.488 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:29.488 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:29.488 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:29.488 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:29.488 Initialization complete. Launching workers. 
00:14:29.488 Starting thread on core 2 00:14:29.488 Starting thread on core 3 00:14:29.488 Starting thread on core 1 00:14:29.488 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:29.746 [2024-11-15 10:33:18.057906] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:33.029 [2024-11-15 10:33:21.465946] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:33.286 Initializing NVMe Controllers 00:14:33.286 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:33.286 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:33.286 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:33.286 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:33.286 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:33.286 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:33.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:33.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:33.286 Initialization complete. Launching workers. 00:14:33.286 Starting thread on core 1 with urgent priority queue 00:14:33.286 Starting thread on core 2 with urgent priority queue 00:14:33.286 Starting thread on core 0 with urgent priority queue 00:14:33.286 Starting thread on core 3 with urgent priority queue 00:14:33.286 SPDK bdev Controller (SPDK2 ) core 0: 4914.67 IO/s 20.35 secs/100000 ios 00:14:33.286 SPDK bdev Controller (SPDK2 ) core 1: 5327.00 IO/s 18.77 secs/100000 ios 00:14:33.286 SPDK bdev Controller (SPDK2 ) core 2: 5380.67 IO/s 18.59 secs/100000 ios 00:14:33.286 SPDK bdev Controller (SPDK2 ) core 3: 5256.67 IO/s 19.02 secs/100000 ios 00:14:33.286 ======================================================== 00:14:33.286 00:14:33.286 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:33.543 [2024-11-15 10:33:21.783251] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:33.543 Initializing NVMe Controllers 00:14:33.543 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:33.543 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:33.543 Namespace ID: 1 size: 0GB 00:14:33.543 Initialization complete. 00:14:33.543 INFO: using host memory buffer for IO 00:14:33.543 Hello world! 
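For reference, the example runs above (spdk_nvme_perf read and write, reconnect, arbitration, hello_world) all attach to the same vfio-user controller through the transport ID string logged with each invocation. A minimal sketch of that sequence, assembled from the commands in the trace above (the absolute jenkins workspace path is shortened to the SPDK build tree for readability; everything else is taken verbatim from the log):

    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

    # 4 KiB, queue depth 128, 5 s read and write runs pinned to core 1 (steps @84 and @85 above)
    ./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
    ./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

    # reconnect, arbitration and hello_world against the same endpoint (steps @86-@88)
    ./build/examples/reconnect   -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
    ./build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
    ./build/examples/hello_world -d 256 -g -r "$TRID"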
00:14:33.543 [2024-11-15 10:33:21.798310] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:33.543 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:33.801 [2024-11-15 10:33:22.102747] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:34.734 Initializing NVMe Controllers 00:14:34.734 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:34.734 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:34.734 Initialization complete. Launching workers. 00:14:34.734 submit (in ns) avg, min, max = 6506.7, 3513.3, 4017970.0 00:14:34.734 complete (in ns) avg, min, max = 29728.0, 2061.1, 6010914.4 00:14:34.734 00:14:34.734 Submit histogram 00:14:34.734 ================ 00:14:34.734 Range in us Cumulative Count 00:14:34.734 3.508 - 3.532: 0.1570% ( 20) 00:14:34.734 3.532 - 3.556: 0.6986% ( 69) 00:14:34.734 3.556 - 3.579: 2.5981% ( 242) 00:14:34.734 3.579 - 3.603: 6.3579% ( 479) 00:14:34.734 3.603 - 3.627: 13.1319% ( 863) 00:14:34.734 3.627 - 3.650: 22.1978% ( 1155) 00:14:34.734 3.650 - 3.674: 32.9042% ( 1364) 00:14:34.734 3.674 - 3.698: 40.6672% ( 989) 00:14:34.734 3.698 - 3.721: 48.0377% ( 939) 00:14:34.734 3.721 - 3.745: 52.9042% ( 620) 00:14:34.734 3.745 - 3.769: 57.0330% ( 526) 00:14:34.734 3.769 - 3.793: 61.1695% ( 527) 00:14:34.734 3.793 - 3.816: 64.4270% ( 415) 00:14:34.734 3.816 - 3.840: 68.1711% ( 477) 00:14:34.734 3.840 - 3.864: 72.1193% ( 503) 00:14:34.734 3.864 - 3.887: 76.3893% ( 544) 00:14:34.734 3.887 - 3.911: 80.5102% ( 525) 00:14:34.734 3.911 - 3.935: 83.7677% ( 415) 00:14:34.734 3.935 - 3.959: 86.0204% ( 287) 00:14:34.734 3.959 - 3.982: 87.9592% ( 247) 00:14:34.734 3.982 - 4.006: 89.4584% ( 191) 00:14:34.734 4.006 - 4.030: 90.5887% ( 144) 00:14:34.734 4.030 - 4.053: 91.5385% ( 121) 00:14:34.734 4.053 - 4.077: 92.4097% ( 111) 00:14:34.734 4.077 - 4.101: 93.2575% ( 108) 00:14:34.734 4.101 - 4.124: 94.0424% ( 100) 00:14:34.734 4.124 - 4.148: 94.6703% ( 80) 00:14:34.734 4.148 - 4.172: 95.1805% ( 65) 00:14:34.734 4.172 - 4.196: 95.4945% ( 40) 00:14:34.734 4.196 - 4.219: 95.7535% ( 33) 00:14:34.734 4.219 - 4.243: 95.9419% ( 24) 00:14:34.734 4.243 - 4.267: 96.0518% ( 14) 00:14:34.734 4.267 - 4.290: 96.2009% ( 19) 00:14:34.734 4.290 - 4.314: 96.3108% ( 14) 00:14:34.734 4.314 - 4.338: 96.4286% ( 15) 00:14:34.734 4.338 - 4.361: 96.5149% ( 11) 00:14:34.734 4.361 - 4.385: 96.6405% ( 16) 00:14:34.734 4.385 - 4.409: 96.7033% ( 8) 00:14:34.734 4.409 - 4.433: 96.7896% ( 11) 00:14:34.734 4.433 - 4.456: 96.8446% ( 7) 00:14:34.734 4.456 - 4.480: 96.8760% ( 4) 00:14:34.734 4.480 - 4.504: 96.8838% ( 1) 00:14:34.734 4.504 - 4.527: 96.8995% ( 2) 00:14:34.734 4.527 - 4.551: 96.9074% ( 1) 00:14:34.734 4.551 - 4.575: 96.9231% ( 2) 00:14:34.734 4.575 - 4.599: 96.9466% ( 3) 00:14:34.734 4.599 - 4.622: 96.9623% ( 2) 00:14:34.734 4.646 - 4.670: 97.0016% ( 5) 00:14:34.734 4.670 - 4.693: 97.0173% ( 2) 00:14:34.734 4.693 - 4.717: 97.0251% ( 1) 00:14:34.734 4.717 - 4.741: 97.0722% ( 6) 00:14:34.734 4.741 - 4.764: 97.0801% ( 1) 00:14:34.734 4.764 - 4.788: 97.1115% ( 4) 00:14:34.734 4.788 - 4.812: 97.1664% ( 7) 00:14:34.734 4.812 - 4.836: 97.2057% ( 5) 00:14:34.734 4.836 - 4.859: 97.2292% ( 3) 00:14:34.734 4.859 - 4.883: 97.2998% ( 9) 00:14:34.734 4.883 - 
4.907: 97.3548% ( 7) 00:14:34.734 4.907 - 4.930: 97.3705% ( 2) 00:14:34.734 4.930 - 4.954: 97.4019% ( 4) 00:14:34.734 4.954 - 4.978: 97.4411% ( 5) 00:14:34.734 4.978 - 5.001: 97.5118% ( 9) 00:14:34.735 5.001 - 5.025: 97.5667% ( 7) 00:14:34.735 5.025 - 5.049: 97.5824% ( 2) 00:14:34.735 5.049 - 5.073: 97.6217% ( 5) 00:14:34.735 5.096 - 5.120: 97.6845% ( 8) 00:14:34.735 5.120 - 5.144: 97.7002% ( 2) 00:14:34.735 5.144 - 5.167: 97.7473% ( 6) 00:14:34.735 5.167 - 5.191: 97.7708% ( 3) 00:14:34.735 5.191 - 5.215: 97.7865% ( 2) 00:14:34.735 5.215 - 5.239: 97.7943% ( 1) 00:14:34.735 5.239 - 5.262: 97.8179% ( 3) 00:14:34.735 5.262 - 5.286: 97.8257% ( 1) 00:14:34.735 5.286 - 5.310: 97.8414% ( 2) 00:14:34.735 5.310 - 5.333: 97.8493% ( 1) 00:14:34.735 5.333 - 5.357: 97.8571% ( 1) 00:14:34.735 5.381 - 5.404: 97.8728% ( 2) 00:14:34.735 5.499 - 5.523: 97.8807% ( 1) 00:14:34.735 5.523 - 5.547: 97.8885% ( 1) 00:14:34.735 5.570 - 5.594: 97.9042% ( 2) 00:14:34.735 5.689 - 5.713: 97.9199% ( 2) 00:14:34.735 5.713 - 5.736: 97.9278% ( 1) 00:14:34.735 5.760 - 5.784: 97.9356% ( 1) 00:14:34.735 5.879 - 5.902: 97.9513% ( 2) 00:14:34.735 5.950 - 5.973: 97.9592% ( 1) 00:14:34.735 5.973 - 5.997: 97.9670% ( 1) 00:14:34.735 6.116 - 6.163: 97.9749% ( 1) 00:14:34.735 6.210 - 6.258: 97.9906% ( 2) 00:14:34.735 6.258 - 6.305: 97.9984% ( 1) 00:14:34.735 6.353 - 6.400: 98.0063% ( 1) 00:14:34.735 6.400 - 6.447: 98.0141% ( 1) 00:14:34.735 6.447 - 6.495: 98.0377% ( 3) 00:14:34.735 6.495 - 6.542: 98.0455% ( 1) 00:14:34.735 6.542 - 6.590: 98.0534% ( 1) 00:14:34.735 6.590 - 6.637: 98.0612% ( 1) 00:14:34.735 6.779 - 6.827: 98.0691% ( 1) 00:14:34.735 6.827 - 6.874: 98.0848% ( 2) 00:14:34.735 6.921 - 6.969: 98.0926% ( 1) 00:14:34.735 7.064 - 7.111: 98.1005% ( 1) 00:14:34.735 7.111 - 7.159: 98.1083% ( 1) 00:14:34.735 7.206 - 7.253: 98.1162% ( 1) 00:14:34.735 7.443 - 7.490: 98.1240% ( 1) 00:14:34.735 7.538 - 7.585: 98.1319% ( 1) 00:14:34.735 7.633 - 7.680: 98.1397% ( 1) 00:14:34.735 7.727 - 7.775: 98.1554% ( 2) 00:14:34.735 7.775 - 7.822: 98.1633% ( 1) 00:14:34.735 7.870 - 7.917: 98.1711% ( 1) 00:14:34.735 7.964 - 8.012: 98.1790% ( 1) 00:14:34.735 8.107 - 8.154: 98.1947% ( 2) 00:14:34.735 8.154 - 8.201: 98.2182% ( 3) 00:14:34.735 8.249 - 8.296: 98.2496% ( 4) 00:14:34.735 8.296 - 8.344: 98.2575% ( 1) 00:14:34.735 8.344 - 8.391: 98.2653% ( 1) 00:14:34.735 8.391 - 8.439: 98.2732% ( 1) 00:14:34.735 8.439 - 8.486: 98.2810% ( 1) 00:14:34.735 8.533 - 8.581: 98.2889% ( 1) 00:14:34.735 8.581 - 8.628: 98.3046% ( 2) 00:14:34.735 8.628 - 8.676: 98.3124% ( 1) 00:14:34.735 8.676 - 8.723: 98.3203% ( 1) 00:14:34.735 8.723 - 8.770: 98.3359% ( 2) 00:14:34.735 8.960 - 9.007: 98.3438% ( 1) 00:14:34.735 9.055 - 9.102: 98.3516% ( 1) 00:14:34.735 9.102 - 9.150: 98.3595% ( 1) 00:14:34.735 9.244 - 9.292: 98.3752% ( 2) 00:14:34.735 9.292 - 9.339: 98.3909% ( 2) 00:14:34.735 9.339 - 9.387: 98.4066% ( 2) 00:14:34.735 9.387 - 9.434: 98.4144% ( 1) 00:14:34.735 9.529 - 9.576: 98.4223% ( 1) 00:14:34.735 9.576 - 9.624: 98.4380% ( 2) 00:14:34.735 9.624 - 9.671: 98.4458% ( 1) 00:14:34.735 9.766 - 9.813: 98.4615% ( 2) 00:14:34.735 9.813 - 9.861: 98.4694% ( 1) 00:14:34.735 9.861 - 9.908: 98.4851% ( 2) 00:14:34.735 9.908 - 9.956: 98.5008% ( 2) 00:14:34.735 9.956 - 10.003: 98.5086% ( 1) 00:14:34.735 10.003 - 10.050: 98.5322% ( 3) 00:14:34.735 10.050 - 10.098: 98.5479% ( 2) 00:14:34.735 10.098 - 10.145: 98.5557% ( 1) 00:14:34.735 10.145 - 10.193: 98.5636% ( 1) 00:14:34.735 10.240 - 10.287: 98.5714% ( 1) 00:14:34.735 10.335 - 10.382: 98.5793% ( 1) 00:14:34.735 10.572 - 10.619: 
98.5871% ( 1) 00:14:34.735 10.667 - 10.714: 98.6028% ( 2) 00:14:34.735 10.904 - 10.951: 98.6107% ( 1) 00:14:34.735 10.999 - 11.046: 98.6264% ( 2) 00:14:34.735 11.046 - 11.093: 98.6342% ( 1) 00:14:34.735 11.093 - 11.141: 98.6421% ( 1) 00:14:34.735 11.236 - 11.283: 98.6499% ( 1) 00:14:34.735 11.283 - 11.330: 98.6656% ( 2) 00:14:34.735 11.378 - 11.425: 98.6892% ( 3) 00:14:34.735 11.425 - 11.473: 98.7127% ( 3) 00:14:34.735 11.615 - 11.662: 98.7206% ( 1) 00:14:34.735 11.710 - 11.757: 98.7284% ( 1) 00:14:34.735 11.757 - 11.804: 98.7363% ( 1) 00:14:34.735 12.136 - 12.231: 98.7441% ( 1) 00:14:34.735 12.231 - 12.326: 98.7520% ( 1) 00:14:34.735 12.421 - 12.516: 98.7598% ( 1) 00:14:34.735 12.610 - 12.705: 98.7834% ( 3) 00:14:34.735 12.895 - 12.990: 98.7912% ( 1) 00:14:34.735 12.990 - 13.084: 98.7991% ( 1) 00:14:34.735 13.084 - 13.179: 98.8148% ( 2) 00:14:34.735 13.179 - 13.274: 98.8226% ( 1) 00:14:34.735 13.369 - 13.464: 98.8305% ( 1) 00:14:34.735 13.559 - 13.653: 98.8383% ( 1) 00:14:34.735 13.748 - 13.843: 98.8540% ( 2) 00:14:34.735 14.317 - 14.412: 98.8697% ( 2) 00:14:34.735 14.412 - 14.507: 98.8776% ( 1) 00:14:34.735 14.696 - 14.791: 98.8854% ( 1) 00:14:34.735 14.886 - 14.981: 98.8932% ( 1) 00:14:34.735 15.170 - 15.265: 98.9011% ( 1) 00:14:34.735 17.161 - 17.256: 98.9168% ( 2) 00:14:34.735 17.256 - 17.351: 98.9246% ( 1) 00:14:34.735 17.351 - 17.446: 98.9639% ( 5) 00:14:34.735 17.446 - 17.541: 99.0110% ( 6) 00:14:34.735 17.541 - 17.636: 99.0267% ( 2) 00:14:34.735 17.636 - 17.730: 99.0816% ( 7) 00:14:34.735 17.730 - 17.825: 99.1130% ( 4) 00:14:34.735 17.825 - 17.920: 99.2072% ( 12) 00:14:34.735 17.920 - 18.015: 99.2308% ( 3) 00:14:34.735 18.015 - 18.110: 99.3171% ( 11) 00:14:34.735 18.110 - 18.204: 99.3956% ( 10) 00:14:34.735 18.204 - 18.299: 99.4662% ( 9) 00:14:34.735 18.299 - 18.394: 99.5212% ( 7) 00:14:34.735 18.394 - 18.489: 99.5918% ( 9) 00:14:34.735 18.489 - 18.584: 99.6782% ( 11) 00:14:34.735 18.584 - 18.679: 99.7331% ( 7) 00:14:34.735 18.679 - 18.773: 99.7724% ( 5) 00:14:34.735 18.773 - 18.868: 99.8038% ( 4) 00:14:34.735 18.868 - 18.963: 99.8273% ( 3) 00:14:34.735 18.963 - 19.058: 99.8352% ( 1) 00:14:34.735 19.058 - 19.153: 99.8587% ( 3) 00:14:34.735 21.333 - 21.428: 99.8666% ( 1) 00:14:34.735 22.945 - 23.040: 99.8744% ( 1) 00:14:34.735 23.135 - 23.230: 99.8823% ( 1) 00:14:34.735 23.230 - 23.324: 99.8901% ( 1) 00:14:34.735 23.609 - 23.704: 99.8980% ( 1) 00:14:34.735 25.031 - 25.221: 99.9058% ( 1) 00:14:34.735 25.410 - 25.600: 99.9215% ( 2) 00:14:34.735 28.444 - 28.634: 99.9294% ( 1) 00:14:34.735 28.824 - 29.013: 99.9372% ( 1) 00:14:34.735 3980.705 - 4004.978: 99.9843% ( 6) 00:14:34.735 4004.978 - 4029.250: 100.0000% ( 2) 00:14:34.735 00:14:34.735 Complete histogram 00:14:34.735 ================== 00:14:34.735 Range in us Cumulative Count 00:14:34.735 2.050 - 2.062: 0.0078% ( 1) 00:14:34.735 2.062 - 2.074: 11.6091% ( 1478) 00:14:34.735 2.074 - 2.086: 43.3516% ( 4044) 00:14:34.735 2.086 - 2.098: 46.7661% ( 435) 00:14:34.735 2.098 - 2.110: 54.0502% ( 928) 00:14:34.735 2.110 - 2.121: 59.5840% ( 705) 00:14:34.735 2.121 - 2.133: 61.0440% ( 186) 00:14:34.735 2.133 - 2.145: 68.0534% ( 893) 00:14:34.735 2.145 - 2.157: 74.2386% ( 788) 00:14:34.735 2.157 - 2.169: 76.6405% ( 306) 00:14:34.735 2.169 - 2.181: 79.0110% ( 302) 00:14:34.735 2.181 - 2.193: 80.5651% ( 198) 00:14:34.735 2.193 - 2.204: 81.3815% ( 104) 00:14:34.735 2.204 - 2.216: 83.9325% ( 325) 00:14:34.735 2.216 - 2.228: 87.4411% ( 447) 00:14:34.735 2.228 - 2.240: 90.0706% ( 335) 00:14:34.736 2.240 - 2.252: 91.8524% ( 227) 00:14:34.736 2.252 - 
2.264: 92.7865% ( 119) 00:14:34.736 2.264 - 2.276: 93.2653% ( 61) 00:14:34.736 2.276 - 2.287: 93.6264% ( 46) 00:14:34.736 2.287 - 2.299: 94.0345% ( 52) 00:14:34.736 2.299 - 2.311: 94.7410% ( 90) 00:14:34.736 2.311 - 2.323: 95.2276% ( 62) 00:14:34.736 2.323 - 2.335: 95.3689% ( 18) 00:14:34.736 2.335 - 2.347: 95.3846% ( 2) 00:14:34.736 2.347 - 2.359: 95.4396% ( 7) 00:14:34.736 2.359 - 2.370: 95.5024% ( 8) 00:14:34.736 2.370 - 2.382: 95.6279% ( 16) 00:14:34.736 2.382 - 2.394: 95.9027% ( 35) 00:14:34.736 2.394 - 2.406: 96.2166% ( 40) 00:14:34.736 2.406 - 2.418: 96.3265% ( 14) 00:14:34.736 2.418 - 2.430: 96.5306% ( 26) 00:14:34.736 2.430 - 2.441: 96.6954% ( 21) 00:14:34.736 2.441 - 2.453: 96.9152% ( 28) 00:14:34.736 2.453 - 2.465: 97.0801% ( 21) 00:14:34.736 2.465 - 2.477: 97.2841% ( 26) 00:14:34.736 2.477 - 2.489: 97.4490% ( 21) 00:14:34.736 2.489 - 2.501: 97.5589% ( 14) 00:14:34.736 2.501 - 2.513: 97.7316% ( 22) 00:14:34.736 2.513 - 2.524: 97.8257% ( 12) 00:14:34.736 2.524 - 2.536: 97.8885% ( 8) 00:14:34.736 2.536 - 2.548: 97.9827% ( 12) 00:14:34.736 2.548 - 2.560: 98.0377% ( 7) 00:14:34.736 2.560 - 2.572: 98.0691% ( 4) 00:14:34.736 2.572 - 2.584: 98.1162% ( 6) 00:14:34.736 2.584 - 2.596: 98.1476% ( 4) 00:14:34.736 2.596 - 2.607: 98.1554% ( 1) 00:14:34.736 2.607 - 2.619: 98.1711% ( 2) 00:14:34.736 2.619 - 2.631: 98.1790% ( 1) 00:14:34.993 2.631 - 2.643: 9[2024-11-15 10:33:23.200331] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:34.993 8.2025% ( 3) 00:14:34.993 2.655 - 2.667: 98.2104% ( 1) 00:14:34.993 2.667 - 2.679: 98.2182% ( 1) 00:14:34.993 2.679 - 2.690: 98.2261% ( 1) 00:14:34.993 2.690 - 2.702: 98.2339% ( 1) 00:14:34.993 2.738 - 2.750: 98.2418% ( 1) 00:14:34.993 2.773 - 2.785: 98.2496% ( 1) 00:14:34.993 2.797 - 2.809: 98.2575% ( 1) 00:14:34.993 2.821 - 2.833: 98.2732% ( 2) 00:14:34.993 2.892 - 2.904: 98.2810% ( 1) 00:14:34.993 2.916 - 2.927: 98.2967% ( 2) 00:14:34.993 3.034 - 3.058: 98.3046% ( 1) 00:14:34.993 3.129 - 3.153: 98.3124% ( 1) 00:14:34.993 3.437 - 3.461: 98.3203% ( 1) 00:14:34.993 3.556 - 3.579: 98.3516% ( 4) 00:14:34.993 3.579 - 3.603: 98.3595% ( 1) 00:14:34.993 3.674 - 3.698: 98.3673% ( 1) 00:14:34.993 3.793 - 3.816: 98.3830% ( 2) 00:14:34.994 3.816 - 3.840: 98.3909% ( 1) 00:14:34.994 3.864 - 3.887: 98.3987% ( 1) 00:14:34.994 3.887 - 3.911: 98.4144% ( 2) 00:14:34.994 3.911 - 3.935: 98.4301% ( 2) 00:14:34.994 3.982 - 4.006: 98.4380% ( 1) 00:14:34.994 4.101 - 4.124: 98.4458% ( 1) 00:14:34.994 4.148 - 4.172: 98.4537% ( 1) 00:14:34.994 4.196 - 4.219: 98.4615% ( 1) 00:14:34.994 4.338 - 4.361: 98.4694% ( 1) 00:14:34.994 4.456 - 4.480: 98.4772% ( 1) 00:14:34.994 4.670 - 4.693: 98.4851% ( 1) 00:14:34.994 4.788 - 4.812: 98.4929% ( 1) 00:14:34.994 6.021 - 6.044: 98.5008% ( 1) 00:14:34.994 6.163 - 6.210: 98.5086% ( 1) 00:14:34.994 6.353 - 6.400: 98.5165% ( 1) 00:14:34.994 6.400 - 6.447: 98.5243% ( 1) 00:14:34.994 6.447 - 6.495: 98.5322% ( 1) 00:14:34.994 6.590 - 6.637: 98.5400% ( 1) 00:14:34.994 6.732 - 6.779: 98.5557% ( 2) 00:14:34.994 6.827 - 6.874: 98.5714% ( 2) 00:14:34.994 7.064 - 7.111: 98.5793% ( 1) 00:14:34.994 7.443 - 7.490: 98.5871% ( 1) 00:14:34.994 7.680 - 7.727: 98.6028% ( 2) 00:14:34.994 8.201 - 8.249: 98.6107% ( 1) 00:14:34.994 8.533 - 8.581: 98.6185% ( 1) 00:14:34.994 8.818 - 8.865: 98.6264% ( 1) 00:14:34.994 9.719 - 9.766: 98.6342% ( 1) 00:14:34.994 12.326 - 12.421: 98.6421% ( 1) 00:14:34.994 13.369 - 13.464: 98.6499% ( 1) 00:14:34.994 15.360 - 15.455: 98.6578% ( 1) 00:14:34.994 15.550 - 15.644: 98.6656% ( 
1) 00:14:34.994 15.834 - 15.929: 98.6970% ( 4) 00:14:34.994 15.929 - 16.024: 98.7363% ( 5) 00:14:34.994 16.024 - 16.119: 98.7520% ( 2) 00:14:34.994 16.119 - 16.213: 98.7755% ( 3) 00:14:34.994 16.213 - 16.308: 98.8148% ( 5) 00:14:34.994 16.308 - 16.403: 98.8383% ( 3) 00:14:34.994 16.403 - 16.498: 98.8697% ( 4) 00:14:34.994 16.498 - 16.593: 98.9717% ( 13) 00:14:34.994 16.593 - 16.687: 99.0502% ( 10) 00:14:34.994 16.687 - 16.782: 99.0895% ( 5) 00:14:34.994 16.782 - 16.877: 99.1444% ( 7) 00:14:34.994 16.877 - 16.972: 99.1523% ( 1) 00:14:34.994 16.972 - 17.067: 99.1758% ( 3) 00:14:34.994 17.067 - 17.161: 99.1915% ( 2) 00:14:34.994 17.256 - 17.351: 99.1994% ( 1) 00:14:34.994 17.446 - 17.541: 99.2072% ( 1) 00:14:34.994 17.541 - 17.636: 99.2308% ( 3) 00:14:34.994 17.730 - 17.825: 99.2465% ( 2) 00:14:34.994 17.825 - 17.920: 99.2543% ( 1) 00:14:34.994 18.015 - 18.110: 99.2622% ( 1) 00:14:34.994 18.204 - 18.299: 99.2700% ( 1) 00:14:34.994 18.394 - 18.489: 99.2779% ( 1) 00:14:34.994 18.679 - 18.773: 99.2857% ( 1) 00:14:34.994 19.058 - 19.153: 99.2936% ( 1) 00:14:34.994 20.764 - 20.859: 99.3014% ( 1) 00:14:34.994 28.824 - 29.013: 99.3093% ( 1) 00:14:34.994 29.961 - 30.151: 99.3171% ( 1) 00:14:34.994 3980.705 - 4004.978: 99.8509% ( 68) 00:14:34.994 4004.978 - 4029.250: 99.9922% ( 18) 00:14:34.994 5995.330 - 6019.603: 100.0000% ( 1) 00:14:34.994 00:14:34.994 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:34.994 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:34.994 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:34.994 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:34.994 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:35.252 [ 00:14:35.252 { 00:14:35.252 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:35.252 "subtype": "Discovery", 00:14:35.252 "listen_addresses": [], 00:14:35.252 "allow_any_host": true, 00:14:35.252 "hosts": [] 00:14:35.252 }, 00:14:35.252 { 00:14:35.252 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:35.252 "subtype": "NVMe", 00:14:35.252 "listen_addresses": [ 00:14:35.252 { 00:14:35.252 "trtype": "VFIOUSER", 00:14:35.252 "adrfam": "IPv4", 00:14:35.252 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:35.252 "trsvcid": "0" 00:14:35.252 } 00:14:35.252 ], 00:14:35.252 "allow_any_host": true, 00:14:35.252 "hosts": [], 00:14:35.252 "serial_number": "SPDK1", 00:14:35.252 "model_number": "SPDK bdev Controller", 00:14:35.252 "max_namespaces": 32, 00:14:35.252 "min_cntlid": 1, 00:14:35.252 "max_cntlid": 65519, 00:14:35.252 "namespaces": [ 00:14:35.252 { 00:14:35.252 "nsid": 1, 00:14:35.252 "bdev_name": "Malloc1", 00:14:35.252 "name": "Malloc1", 00:14:35.252 "nguid": "368A6F54642E4676AD8A717B60DFF501", 00:14:35.252 "uuid": "368a6f54-642e-4676-ad8a-717b60dff501" 00:14:35.252 }, 00:14:35.252 { 00:14:35.252 "nsid": 2, 00:14:35.252 "bdev_name": "Malloc3", 00:14:35.252 "name": "Malloc3", 00:14:35.252 "nguid": "76FE01958F4743BF8183086A94F4EAB7", 00:14:35.252 "uuid": "76fe0195-8f47-43bf-8183-086a94f4eab7" 00:14:35.252 } 00:14:35.252 ] 00:14:35.252 }, 00:14:35.252 { 00:14:35.252 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:14:35.252 "subtype": "NVMe", 00:14:35.252 "listen_addresses": [ 00:14:35.252 { 00:14:35.252 "trtype": "VFIOUSER", 00:14:35.252 "adrfam": "IPv4", 00:14:35.252 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:35.252 "trsvcid": "0" 00:14:35.252 } 00:14:35.252 ], 00:14:35.252 "allow_any_host": true, 00:14:35.252 "hosts": [], 00:14:35.252 "serial_number": "SPDK2", 00:14:35.252 "model_number": "SPDK bdev Controller", 00:14:35.252 "max_namespaces": 32, 00:14:35.252 "min_cntlid": 1, 00:14:35.252 "max_cntlid": 65519, 00:14:35.252 "namespaces": [ 00:14:35.252 { 00:14:35.252 "nsid": 1, 00:14:35.252 "bdev_name": "Malloc2", 00:14:35.252 "name": "Malloc2", 00:14:35.252 "nguid": "701ECD518F0849F09965B0911C17A856", 00:14:35.252 "uuid": "701ecd51-8f08-49f0-9965-b0911c17a856" 00:14:35.252 } 00:14:35.252 ] 00:14:35.252 } 00:14:35.252 ] 00:14:35.252 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:35.252 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=356033 00:14:35.252 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:35.252 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:35.252 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:14:35.252 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:35.252 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:14:35.252 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=1 00:14:35.252 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:14:35.252 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:35.252 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:14:35.252 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=2 00:14:35.252 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:14:35.252 [2024-11-15 10:33:23.684902] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:35.510 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:35.510 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:35.510 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:14:35.510 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:35.510 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:35.768 Malloc4 00:14:35.768 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:36.026 [2024-11-15 10:33:24.276280] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:36.026 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:36.026 Asynchronous Event Request test 00:14:36.026 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:36.026 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:36.026 Registering asynchronous event callbacks... 00:14:36.026 Starting namespace attribute notice tests for all controllers... 00:14:36.026 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:36.026 aer_cb - Changed Namespace 00:14:36.026 Cleaning up... 00:14:36.284 [ 00:14:36.284 { 00:14:36.284 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:36.284 "subtype": "Discovery", 00:14:36.284 "listen_addresses": [], 00:14:36.284 "allow_any_host": true, 00:14:36.284 "hosts": [] 00:14:36.284 }, 00:14:36.284 { 00:14:36.284 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:36.284 "subtype": "NVMe", 00:14:36.284 "listen_addresses": [ 00:14:36.284 { 00:14:36.284 "trtype": "VFIOUSER", 00:14:36.284 "adrfam": "IPv4", 00:14:36.284 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:36.284 "trsvcid": "0" 00:14:36.284 } 00:14:36.284 ], 00:14:36.284 "allow_any_host": true, 00:14:36.284 "hosts": [], 00:14:36.284 "serial_number": "SPDK1", 00:14:36.284 "model_number": "SPDK bdev Controller", 00:14:36.284 "max_namespaces": 32, 00:14:36.284 "min_cntlid": 1, 00:14:36.284 "max_cntlid": 65519, 00:14:36.284 "namespaces": [ 00:14:36.284 { 00:14:36.284 "nsid": 1, 00:14:36.284 "bdev_name": "Malloc1", 00:14:36.284 "name": "Malloc1", 00:14:36.284 "nguid": "368A6F54642E4676AD8A717B60DFF501", 00:14:36.284 "uuid": "368a6f54-642e-4676-ad8a-717b60dff501" 00:14:36.284 }, 00:14:36.284 { 00:14:36.284 "nsid": 2, 00:14:36.284 "bdev_name": "Malloc3", 00:14:36.284 "name": "Malloc3", 00:14:36.284 "nguid": "76FE01958F4743BF8183086A94F4EAB7", 00:14:36.284 "uuid": "76fe0195-8f47-43bf-8183-086a94f4eab7" 00:14:36.284 } 00:14:36.284 ] 00:14:36.284 }, 00:14:36.284 { 00:14:36.284 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:36.284 "subtype": "NVMe", 00:14:36.284 "listen_addresses": [ 00:14:36.284 { 00:14:36.284 "trtype": "VFIOUSER", 00:14:36.284 "adrfam": "IPv4", 00:14:36.284 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:36.284 "trsvcid": "0" 00:14:36.284 } 00:14:36.284 ], 00:14:36.284 "allow_any_host": true, 00:14:36.284 "hosts": [], 00:14:36.284 "serial_number": "SPDK2", 00:14:36.284 "model_number": "SPDK bdev Controller", 00:14:36.284 "max_namespaces": 32, 00:14:36.284 "min_cntlid": 1, 00:14:36.284 "max_cntlid": 65519, 00:14:36.284 "namespaces": [ 00:14:36.284 
{ 00:14:36.284 "nsid": 1, 00:14:36.284 "bdev_name": "Malloc2", 00:14:36.284 "name": "Malloc2", 00:14:36.284 "nguid": "701ECD518F0849F09965B0911C17A856", 00:14:36.284 "uuid": "701ecd51-8f08-49f0-9965-b0911c17a856" 00:14:36.284 }, 00:14:36.284 { 00:14:36.284 "nsid": 2, 00:14:36.284 "bdev_name": "Malloc4", 00:14:36.285 "name": "Malloc4", 00:14:36.285 "nguid": "786E77C4C7604AFC81179D31B2DB93F4", 00:14:36.285 "uuid": "786e77c4-c760-4afc-8117-9d31b2db93f4" 00:14:36.285 } 00:14:36.285 ] 00:14:36.285 } 00:14:36.285 ] 00:14:36.285 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 356033 00:14:36.285 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:36.285 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 349697 00:14:36.285 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 349697 ']' 00:14:36.285 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 349697 00:14:36.285 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:14:36.285 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:36.285 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 349697 00:14:36.285 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:36.285 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:36.285 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 349697' 00:14:36.285 killing process with pid 349697 00:14:36.285 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 349697 00:14:36.285 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 349697 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=356182 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 356182' 00:14:36.542 Process pid: 356182 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # 
waitforlisten 356182 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 356182 ']' 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:36.542 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:36.542 [2024-11-15 10:33:24.942856] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:36.542 [2024-11-15 10:33:24.943880] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:14:36.542 [2024-11-15 10:33:24.943940] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.800 [2024-11-15 10:33:25.010755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.800 [2024-11-15 10:33:25.069747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.800 [2024-11-15 10:33:25.069804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.800 [2024-11-15 10:33:25.069838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.800 [2024-11-15 10:33:25.069850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.800 [2024-11-15 10:33:25.069860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.800 [2024-11-15 10:33:25.071322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.800 [2024-11-15 10:33:25.071387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.800 [2024-11-15 10:33:25.071455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.800 [2024-11-15 10:33:25.071458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.800 [2024-11-15 10:33:25.160065] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:36.800 [2024-11-15 10:33:25.160271] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:36.800 [2024-11-15 10:33:25.160556] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:36.800 [2024-11-15 10:33:25.161125] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:36.801 [2024-11-15 10:33:25.161360] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
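The interrupt-mode pass that follows restarts the target with --interrupt-mode and rebuilds the vfio-user subsystems with -M -I on the VFIOUSER transport. Condensed from the trace below into a sketch (the rpc.py path is abbreviated, only the first of the two devices is shown, and backgrounding the target is implied by the test harness rather than spelled out in the log):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &   # target in interrupt mode
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I                # interrupt-mode transport
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0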
00:14:36.801 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:36.801 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:36.801 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:37.737 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:38.304 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:38.304 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:38.304 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:38.304 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:38.304 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:38.304 Malloc1 00:14:38.304 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:38.562 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:39.129 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:39.387 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:39.387 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:39.387 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:39.645 Malloc2 00:14:39.645 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:39.902 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:40.160 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:40.417 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:40.417 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 356182 00:14:40.418 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 356182 ']' 00:14:40.418 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 356182 00:14:40.418 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:14:40.418 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:40.418 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 356182 00:14:40.418 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:40.418 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:40.418 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 356182' 00:14:40.418 killing process with pid 356182 00:14:40.418 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 356182 00:14:40.418 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 356182 00:14:40.675 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:40.675 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:40.675 00:14:40.675 real 0m53.955s 00:14:40.675 user 3m28.743s 00:14:40.675 sys 0m3.931s 00:14:40.675 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:40.675 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:40.675 ************************************ 00:14:40.675 END TEST nvmf_vfio_user 00:14:40.676 ************************************ 00:14:40.676 10:33:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:40.676 10:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:40.676 10:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:40.676 10:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:40.676 ************************************ 00:14:40.676 START TEST nvmf_vfio_user_nvme_compliance 00:14:40.676 ************************************ 00:14:40.676 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:40.676 * Looking for test storage... 
00:14:40.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:40.676 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:40.676 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:14:40.676 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:40.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.935 --rc genhtml_branch_coverage=1 00:14:40.935 --rc genhtml_function_coverage=1 00:14:40.935 --rc genhtml_legend=1 00:14:40.935 --rc geninfo_all_blocks=1 00:14:40.935 --rc geninfo_unexecuted_blocks=1 00:14:40.935 00:14:40.935 ' 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:40.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.935 --rc genhtml_branch_coverage=1 00:14:40.935 --rc genhtml_function_coverage=1 00:14:40.935 --rc genhtml_legend=1 00:14:40.935 --rc geninfo_all_blocks=1 00:14:40.935 --rc geninfo_unexecuted_blocks=1 00:14:40.935 00:14:40.935 ' 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:40.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.935 --rc genhtml_branch_coverage=1 00:14:40.935 --rc genhtml_function_coverage=1 00:14:40.935 --rc genhtml_legend=1 00:14:40.935 --rc geninfo_all_blocks=1 00:14:40.935 --rc geninfo_unexecuted_blocks=1 00:14:40.935 00:14:40.935 ' 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:40.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.935 --rc genhtml_branch_coverage=1 00:14:40.935 --rc genhtml_function_coverage=1 00:14:40.935 --rc genhtml_legend=1 00:14:40.935 --rc geninfo_all_blocks=1 00:14:40.935 --rc 
geninfo_unexecuted_blocks=1 00:14:40.935 00:14:40.935 ' 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.935 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:40.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=356783 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 356783' 00:14:40.936 Process pid: 356783 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 356783 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 356783 ']' 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:40.936 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:40.936 [2024-11-15 10:33:29.265402] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:14:40.936 [2024-11-15 10:33:29.265496] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.936 [2024-11-15 10:33:29.330204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:40.936 [2024-11-15 10:33:29.388056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.936 [2024-11-15 10:33:29.388108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.936 [2024-11-15 10:33:29.388135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.936 [2024-11-15 10:33:29.388147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.936 [2024-11-15 10:33:29.388156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.936 [2024-11-15 10:33:29.389531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.936 [2024-11-15 10:33:29.389588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.936 [2024-11-15 10:33:29.389592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.194 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:41.194 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:14:41.195 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:42.128 malloc0 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:42.128 10:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.128 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:42.386 00:14:42.386 00:14:42.386 CUnit - A unit testing framework for C - Version 2.1-3 00:14:42.386 http://cunit.sourceforge.net/ 00:14:42.386 00:14:42.386 00:14:42.386 Suite: nvme_compliance 00:14:42.386 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-15 10:33:30.747971] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:42.386 [2024-11-15 10:33:30.749446] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:42.386 [2024-11-15 10:33:30.749473] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:42.386 [2024-11-15 10:33:30.749485] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:42.386 [2024-11-15 10:33:30.750985] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:42.386 passed 00:14:42.386 Test: admin_identify_ctrlr_verify_fused ...[2024-11-15 10:33:30.835608] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:42.386 [2024-11-15 10:33:30.838629] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:42.644 passed 00:14:42.644 Test: admin_identify_ns ...[2024-11-15 10:33:30.925898] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:42.644 [2024-11-15 10:33:30.989383] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:42.644 [2024-11-15 10:33:30.997383] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:42.644 [2024-11-15 10:33:31.018505] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:42.644 passed 00:14:42.644 Test: admin_get_features_mandatory_features ...[2024-11-15 10:33:31.101027] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:42.644 [2024-11-15 10:33:31.104051] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:42.902 passed 00:14:42.902 Test: admin_get_features_optional_features ...[2024-11-15 10:33:31.186603] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:42.902 [2024-11-15 10:33:31.189625] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:42.902 passed 00:14:42.902 Test: admin_set_features_number_of_queues ...[2024-11-15 10:33:31.271913] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:43.160 [2024-11-15 10:33:31.376480] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:43.160 passed 00:14:43.160 Test: admin_get_log_page_mandatory_logs ...[2024-11-15 10:33:31.460169] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:43.160 [2024-11-15 10:33:31.463194] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:43.160 passed 00:14:43.160 Test: admin_get_log_page_with_lpo ...[2024-11-15 10:33:31.545383] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:43.160 [2024-11-15 10:33:31.613382] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:43.417 [2024-11-15 10:33:31.626476] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:43.417 passed 00:14:43.417 Test: fabric_property_get ...[2024-11-15 10:33:31.710929] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:43.417 [2024-11-15 10:33:31.712203] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:43.417 [2024-11-15 10:33:31.713949] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:43.417 passed 00:14:43.417 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-15 10:33:31.800513] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:43.417 [2024-11-15 10:33:31.801843] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:43.417 [2024-11-15 10:33:31.803536] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:43.417 passed 00:14:43.675 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-15 10:33:31.887935] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:43.675 [2024-11-15 10:33:31.971387] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:43.675 [2024-11-15 10:33:31.987375] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:43.675 [2024-11-15 10:33:31.992614] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:43.675 passed 00:14:43.675 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-15 10:33:32.080039] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:43.675 [2024-11-15 10:33:32.081391] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:43.675 [2024-11-15 10:33:32.083058] vfio_user.c:2794:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:43.675 passed 00:14:43.933 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-15 10:33:32.167335] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:43.933 [2024-11-15 10:33:32.245372] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:43.933 [2024-11-15 10:33:32.269372] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:43.933 [2024-11-15 10:33:32.274556] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:43.933 passed 00:14:43.933 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-15 10:33:32.358085] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:43.933 [2024-11-15 10:33:32.359417] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:43.933 [2024-11-15 10:33:32.359459] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:43.933 [2024-11-15 10:33:32.361109] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:43.933 passed 00:14:44.191 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-15 10:33:32.443433] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:44.191 [2024-11-15 10:33:32.537376] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:44.191 [2024-11-15 10:33:32.545369] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:44.191 [2024-11-15 10:33:32.553372] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:44.191 [2024-11-15 10:33:32.561384] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:44.191 [2024-11-15 10:33:32.590495] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:44.191 passed 00:14:44.448 Test: admin_create_io_sq_verify_pc ...[2024-11-15 10:33:32.675134] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:44.448 [2024-11-15 10:33:32.690387] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:44.448 [2024-11-15 10:33:32.707492] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:44.448 passed 00:14:44.448 Test: admin_create_io_qp_max_qps ...[2024-11-15 10:33:32.792052] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:45.820 [2024-11-15 10:33:33.910382] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:46.079 [2024-11-15 10:33:34.290614] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:46.079 passed 00:14:46.079 Test: admin_create_io_sq_shared_cq ...[2024-11-15 10:33:34.372902] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:46.079 [2024-11-15 10:33:34.504387] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:46.079 [2024-11-15 10:33:34.541474] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:46.337 passed 00:14:46.337 00:14:46.337 Run Summary: Type Total Ran Passed Failed Inactive 00:14:46.337 suites 1 1 n/a 0 0 00:14:46.337 tests 18 18 18 0 0 00:14:46.337 asserts 
360 360 360 0 n/a 00:14:46.337 00:14:46.337 Elapsed time = 1.575 seconds 00:14:46.337 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 356783 00:14:46.337 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 356783 ']' 00:14:46.337 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 356783 00:14:46.337 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:14:46.337 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:46.337 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 356783 00:14:46.337 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:46.337 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:46.337 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 356783' 00:14:46.337 killing process with pid 356783 00:14:46.337 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 356783 00:14:46.337 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 356783 00:14:46.594 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:46.594 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:46.594 00:14:46.594 real 0m5.787s 00:14:46.594 user 0m16.292s 00:14:46.594 sys 0m0.522s 00:14:46.594 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:46.594 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:46.594 ************************************ 00:14:46.594 END TEST nvmf_vfio_user_nvme_compliance 00:14:46.594 ************************************ 00:14:46.595 10:33:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:46.595 10:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:46.595 10:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:46.595 10:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:46.595 ************************************ 00:14:46.595 START TEST nvmf_vfio_user_fuzz 00:14:46.595 ************************************ 00:14:46.595 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:46.595 * Looking for test storage... 
00:14:46.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.595 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:46.595 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:14:46.595 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:46.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.595 --rc genhtml_branch_coverage=1 00:14:46.595 --rc genhtml_function_coverage=1 00:14:46.595 --rc genhtml_legend=1 00:14:46.595 --rc geninfo_all_blocks=1 00:14:46.595 --rc geninfo_unexecuted_blocks=1 00:14:46.595 00:14:46.595 ' 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:46.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.595 --rc genhtml_branch_coverage=1 00:14:46.595 --rc genhtml_function_coverage=1 00:14:46.595 --rc genhtml_legend=1 00:14:46.595 --rc geninfo_all_blocks=1 00:14:46.595 --rc geninfo_unexecuted_blocks=1 00:14:46.595 00:14:46.595 ' 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:46.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.595 --rc genhtml_branch_coverage=1 00:14:46.595 --rc genhtml_function_coverage=1 00:14:46.595 --rc genhtml_legend=1 00:14:46.595 --rc geninfo_all_blocks=1 00:14:46.595 --rc geninfo_unexecuted_blocks=1 00:14:46.595 00:14:46.595 ' 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:46.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.595 --rc genhtml_branch_coverage=1 00:14:46.595 --rc genhtml_function_coverage=1 00:14:46.595 --rc genhtml_legend=1 00:14:46.595 --rc geninfo_all_blocks=1 00:14:46.595 --rc geninfo_unexecuted_blocks=1 00:14:46.595 00:14:46.595 ' 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:46.595 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:46.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:46.596 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:46.853 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=357514 00:14:46.853 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:46.853 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 357514' 00:14:46.853 Process pid: 357514 00:14:46.853 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:46.853 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 357514 00:14:46.853 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 357514 ']' 00:14:46.853 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.853 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:46.853 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:46.853 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:46.853 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:47.111 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:47.111 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:14:47.111 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:48.044 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:48.044 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.044 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:48.045 malloc0 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:14:48.045 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:20.108 Fuzzing completed. Shutting down the fuzz application 00:15:20.108 00:15:20.108 Dumping successful admin opcodes: 00:15:20.108 8, 9, 10, 24, 00:15:20.108 Dumping successful io opcodes: 00:15:20.108 0, 00:15:20.108 NS: 0x20000081ef00 I/O qp, Total commands completed: 677777, total successful commands: 2639, random_seed: 1741761664 00:15:20.108 NS: 0x20000081ef00 admin qp, Total commands completed: 155566, total successful commands: 1253, random_seed: 2547278016 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 357514 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 357514 ']' 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 357514 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 357514 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 357514' 00:15:20.108 killing process with pid 357514 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 357514 00:15:20.108 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 357514 00:15:20.108 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:20.108 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:20.108 00:15:20.108 real 0m32.238s 00:15:20.108 user 0m33.386s 00:15:20.108 sys 0m25.952s 00:15:20.108 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:20.108 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:20.108 ************************************ 
00:15:20.108 END TEST nvmf_vfio_user_fuzz 00:15:20.108 ************************************ 00:15:20.108 10:34:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:20.108 10:34:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:20.108 10:34:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:20.108 10:34:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:20.108 ************************************ 00:15:20.108 START TEST nvmf_auth_target 00:15:20.108 ************************************ 00:15:20.108 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:20.108 * Looking for test storage... 00:15:20.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:20.108 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:20.108 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:15:20.108 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:20.108 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:20.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.109 --rc genhtml_branch_coverage=1 00:15:20.109 --rc genhtml_function_coverage=1 00:15:20.109 --rc genhtml_legend=1 00:15:20.109 --rc geninfo_all_blocks=1 00:15:20.109 --rc geninfo_unexecuted_blocks=1 00:15:20.109 00:15:20.109 ' 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:20.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.109 --rc genhtml_branch_coverage=1 00:15:20.109 --rc genhtml_function_coverage=1 00:15:20.109 --rc genhtml_legend=1 00:15:20.109 --rc geninfo_all_blocks=1 00:15:20.109 --rc geninfo_unexecuted_blocks=1 00:15:20.109 00:15:20.109 ' 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:20.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.109 --rc genhtml_branch_coverage=1 00:15:20.109 --rc genhtml_function_coverage=1 00:15:20.109 --rc genhtml_legend=1 00:15:20.109 --rc geninfo_all_blocks=1 00:15:20.109 --rc geninfo_unexecuted_blocks=1 00:15:20.109 00:15:20.109 ' 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:20.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.109 --rc genhtml_branch_coverage=1 00:15:20.109 --rc genhtml_function_coverage=1 00:15:20.109 --rc genhtml_legend=1 00:15:20.109 --rc geninfo_all_blocks=1 00:15:20.109 --rc geninfo_unexecuted_blocks=1 00:15:20.109 00:15:20.109 ' 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.109 10:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.109 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:20.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:20.110 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:21.486 
10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:15:21.486 Found 0000:82:00.0 (0x8086 - 0x159b) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:21.486 10:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:15:21.486 Found 0000:82:00.1 (0x8086 - 0x159b) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:15:21.486 Found net devices under 0000:82:00.0: cvl_0_0 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:15:21.486 Found net devices under 0000:82:00.1: cvl_0_1 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:21.486 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:21.487 10:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:21.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:15:21.487 00:15:21.487 --- 10.0.0.2 ping statistics --- 00:15:21.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.487 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:21.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:21.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:15:21.487 00:15:21.487 --- 10.0.0.1 ping statistics --- 00:15:21.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.487 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=362970 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 362970 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 362970 ']' 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
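The nvmf_tcp_init steps logged above set up the test network before the target starts: the target-side port is moved into a private network namespace, both ends of the link are addressed, TCP port 4420 is opened in the firewall, and the path is ping-checked in both directions. A minimal stand-alone sketch of that bring-up, reusing the cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addresses seen in this run (a reconstruction for readability, not the SPDK helper itself):

#!/usr/bin/env bash
# Recreate the test network from the log: cvl_0_0 serves the NVMe-oF target
# inside a private namespace, cvl_0_1 stays in the default namespace as the
# initiator-facing interface.
set -euo pipefail
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0  INITIATOR_IF=cvl_0_1
TARGET_IP=10.0.0.2 INITIATOR_IP=10.0.0.1

# Start from clean addresses, then isolate the target NIC in its own netns.
ip -4 addr flush dev "$TARGET_IF"
ip -4 addr flush dev "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Address both ends and bring the links (plus loopback in the namespace) up.
ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port, then verify reachability both ways,
# exactly as the test does before starting nvmf_tgt.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$TARGET_IP"
ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"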
00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:21.487 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.745 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:21.745 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:21.745 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=362990 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=722ea9e55c887ae5433a494061a17e84d4c753e3aef200bd 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.O51 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 722ea9e55c887ae5433a494061a17e84d4c753e3aef200bd 0 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 722ea9e55c887ae5433a494061a17e84d4c753e3aef200bd 0 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=722ea9e55c887ae5433a494061a17e84d4c753e3aef200bd 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
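Each secret above is produced the same way by gen_dhchap_key: draw len/2 random bytes with xxd, keep the hex string itself as the secret, wrap it in the DHHC-1 representation with the digest id used for key transformation (null=0, sha256=1, sha384=2, sha512=3), and store the result mode 0600 in a mktemp file. A rough stand-alone sketch of that flow; the function name is made up, and the 4-byte trailer inside the base64 payload is assumed to be a little-endian CRC32 of the secret (the usual DH-HMAC-CHAP secret encoding), so treat the Python part as an approximation of what the helper's inline python does:

# Sketch only -- not the SPDK gen_dhchap_key/format_dhchap_key helpers.
gen_dhchap_key_sketch() {    # usage: gen_dhchap_key_sketch <digest id> <hex length>
    local digest=$1 len=$2 key file

    # <len> hex characters come from len/2 random bytes, e.g. 48 -> 24 bytes.
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-XXX)

    # DHHC-1:<digest id>:<base64(secret + trailer)>: -- trailer assumed CRC32.
    python3 -c '
import base64, sys, zlib
digest, key = int(sys.argv[1]), sys.argv[2].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()), end="")
' "$digest" "$key" > "$file"

    chmod 0600 "$file"
    echo "$file"
}

# Example: a 48-character secret with no key transformation, like key0 above.
gen_dhchap_key_sketch 0 48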
00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.O51 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.O51 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.O51 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3594ab9e2c02dd55aa004d083efe46cf5a0a64ad9fb29c6bd9150aed69b0352b 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.z1q 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3594ab9e2c02dd55aa004d083efe46cf5a0a64ad9fb29c6bd9150aed69b0352b 3 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3594ab9e2c02dd55aa004d083efe46cf5a0a64ad9fb29c6bd9150aed69b0352b 3 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3594ab9e2c02dd55aa004d083efe46cf5a0a64ad9fb29c6bd9150aed69b0352b 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.z1q 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.z1q 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.z1q 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.745 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=af5027022cb696d806be4c967d6fa9df 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.osT 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key af5027022cb696d806be4c967d6fa9df 1 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 af5027022cb696d806be4c967d6fa9df 1 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=af5027022cb696d806be4c967d6fa9df 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.osT 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.osT 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.osT 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ef8ef0910a720c534e8347ebb629975aa91f11b41a13d66f 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3Ky 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ef8ef0910a720c534e8347ebb629975aa91f11b41a13d66f 2 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ef8ef0910a720c534e8347ebb629975aa91f11b41a13d66f 2 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.746 10:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ef8ef0910a720c534e8347ebb629975aa91f11b41a13d66f 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:21.746 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3Ky 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3Ky 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.3Ky 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4a93c869b5d498e5a396594009599efda678d7a30c274bb9 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.D7y 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4a93c869b5d498e5a396594009599efda678d7a30c274bb9 2 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4a93c869b5d498e5a396594009599efda678d7a30c274bb9 2 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4a93c869b5d498e5a396594009599efda678d7a30c274bb9 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.D7y 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.D7y 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.D7y 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
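The hex strings above are what later show up base64-wrapped in the DHHC-1:..: secrets handed to nvme connect. A small throwaway helper (not part of the test suite) to unwrap such a secret and inspect the embedded key; interpreting the trailing four bytes as a little-endian CRC32 is the same assumption as in the sketch further up:

decode_dhchap_secret() {    # usage: decode_dhchap_secret 'DHHC-1:<id>:<base64>:'
    python3 -c '
import base64, sys, zlib
_, digest, b64, _ = sys.argv[1].split(":")
blob = base64.b64decode(b64)
key, trailer = blob[:-4], blob[-4:]
# The secrets in this test are ASCII hex strings, so decode() is safe here.
status = "ok" if trailer == zlib.crc32(key).to_bytes(4, "little") else "MISMATCH"
print("digest id :", int(digest, 16))
print("secret    :", key.decode())
print("crc check :", status)
' "$1"
}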
00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e2513589bad5f1f8ae0a8e1bc1c155c8 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2Z2 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e2513589bad5f1f8ae0a8e1bc1c155c8 1 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e2513589bad5f1f8ae0a8e1bc1c155c8 1 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e2513589bad5f1f8ae0a8e1bc1c155c8 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2Z2 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2Z2 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.2Z2 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=30b055ae18509aaca04252119ab766542d2e0f3726da4183fe723e39e4389de3 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.aWh 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 30b055ae18509aaca04252119ab766542d2e0f3726da4183fe723e39e4389de3 3 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 30b055ae18509aaca04252119ab766542d2e0f3726da4183fe723e39e4389de3 3 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=30b055ae18509aaca04252119ab766542d2e0f3726da4183fe723e39e4389de3 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.aWh 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.aWh 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.aWh 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 362970 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 362970 ']' 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:22.005 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.263 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:22.263 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:22.263 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 362990 /var/tmp/host.sock 00:15:22.263 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 362990 ']' 00:15:22.263 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:15:22.263 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:22.263 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:22.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
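At this point eight key files exist (keys 0-3 plus controller keys 0-2; ckey3 is intentionally left empty) and the test waits for the two applications launched earlier: nvmf_tgt (pid 362970 here) running inside the namespace and answering on the default /var/tmp/spdk.sock, and a second spdk_tgt (pid 362990) playing the host role on /var/tmp/host.sock. A rough sketch of that launch-and-wait step; wait_for_rpc_sock is a stand-in for the autotest waitforlisten helper, not its actual implementation:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

wait_for_rpc_sock() {    # poll an RPC socket until the application responds
    local sock=$1 i
    for ((i = 0; i < 120; i++)); do
        "$SPDK/scripts/rpc.py" -s "$sock" spdk_get_version &> /dev/null && return 0
        sleep 0.5
    done
    return 1
}

# Target application inside the test namespace, with nvmf auth debug logging.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
wait_for_rpc_sock /var/tmp/spdk.sock

# Host-side application on its own core mask and RPC socket.
"$SPDK/build/bin/spdk_tgt" -m 2 -r /var/tmp/host.sock -L nvme_auth &
hostpid=$!
wait_for_rpc_sock /var/tmp/host.sock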
00:15:22.263 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:22.263 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.521 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:22.521 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:22.521 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:22.521 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.521 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.521 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.521 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:22.521 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.O51 00:15:22.521 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.521 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.521 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.521 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.O51 00:15:22.521 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.O51 00:15:23.086 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.z1q ]] 00:15:23.086 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.z1q 00:15:23.086 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.086 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.086 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.086 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.z1q 00:15:23.086 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.z1q 00:15:23.086 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:23.086 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.osT 00:15:23.086 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.086 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.086 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.086 10:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.osT 00:15:23.086 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.osT 00:15:23.344 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.3Ky ]] 00:15:23.344 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Ky 00:15:23.344 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.344 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.344 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.344 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Ky 00:15:23.344 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Ky 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.D7y 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.D7y 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.D7y 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.2Z2 ]] 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2Z2 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2Z2 00:15:23.910 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2Z2 00:15:24.168 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:24.168 10:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.aWh 00:15:24.168 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.168 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.168 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.168 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.aWh 00:15:24.168 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.aWh 00:15:24.734 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:24.734 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:24.734 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.734 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.734 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:24.734 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:24.734 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:24.734 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.734 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:24.734 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:24.735 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:24.735 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.735 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.735 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.735 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.735 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.735 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.735 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.735 
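The RPCs above register each generated file as a named key on both sides (rpc_cmd talks to the target on the default socket, hostrpc to /var/tmp/host.sock), restrict the host driver to the digest/dhgroup pair under test, and then authorize the host NQN on the subsystem with a key/controller-key pair. Condensed into plain rpc.py calls for the first pair, using the file names, NQNs and sockets from this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd

# Register key0/ckey0 with the target (default RPC socket) and the host app.
"$SPDK/scripts/rpc.py" keyring_file_add_key key0 /tmp/spdk.key-null.O51
"$SPDK/scripts/rpc.py" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.z1q
"$SPDK/scripts/rpc.py" -s "$HOSTSOCK" keyring_file_add_key key0 /tmp/spdk.key-null.O51
"$SPDK/scripts/rpc.py" -s "$HOSTSOCK" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.z1q

# Limit the host to the digest/dhgroup combination being exercised.
"$SPDK/scripts/rpc.py" -s "$HOSTSOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Authorize the host NQN on the subsystem with that key pair (bidirectional auth).
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0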
10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.300 00:15:25.300 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.300 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.300 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.558 { 00:15:25.558 "cntlid": 1, 00:15:25.558 "qid": 0, 00:15:25.558 "state": "enabled", 00:15:25.558 "thread": "nvmf_tgt_poll_group_000", 00:15:25.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:25.558 "listen_address": { 00:15:25.558 "trtype": "TCP", 00:15:25.558 "adrfam": "IPv4", 00:15:25.558 "traddr": "10.0.0.2", 00:15:25.558 "trsvcid": "4420" 00:15:25.558 }, 00:15:25.558 "peer_address": { 00:15:25.558 "trtype": "TCP", 00:15:25.558 "adrfam": "IPv4", 00:15:25.558 "traddr": "10.0.0.1", 00:15:25.558 "trsvcid": "39288" 00:15:25.558 }, 00:15:25.558 "auth": { 00:15:25.558 "state": "completed", 00:15:25.558 "digest": "sha256", 00:15:25.558 "dhgroup": "null" 00:15:25.558 } 00:15:25.558 } 00:15:25.558 ]' 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.558 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.816 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:15:25.816 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.078 10:34:18 
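Besides the SPDK bdev path, each key is also exercised through the kernel initiator with nvme-cli, passing the cleartext DHHC-1 secrets on the command line, then disconnecting and dropping the host entry before the next key. A sketch of that leg with the NQNs and address taken from the log; the secret values are placeholders here, not the real key material:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 \
        --dhchap-secret 'DHHC-1:00:<host key>' --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # remove the host authorization again so the next keyid starts from a clean state
    "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"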
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.078 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.078 00:15:31.078 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.078 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.078 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.337 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.337 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.337 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.337 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.337 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.337 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.337 { 00:15:31.337 "cntlid": 3, 00:15:31.337 "qid": 0, 00:15:31.337 "state": "enabled", 00:15:31.337 "thread": "nvmf_tgt_poll_group_000", 00:15:31.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:31.337 "listen_address": { 00:15:31.337 "trtype": "TCP", 00:15:31.337 "adrfam": "IPv4", 00:15:31.337 "traddr": "10.0.0.2", 00:15:31.337 "trsvcid": "4420" 00:15:31.337 }, 00:15:31.337 "peer_address": { 00:15:31.337 "trtype": "TCP", 00:15:31.337 "adrfam": "IPv4", 00:15:31.337 "traddr": "10.0.0.1", 00:15:31.337 "trsvcid": "41212" 00:15:31.337 }, 00:15:31.337 "auth": { 00:15:31.337 "state": "completed", 00:15:31.337 "digest": "sha256", 00:15:31.338 "dhgroup": "null" 00:15:31.338 } 00:15:31.338 } 00:15:31.338 ]' 00:15:31.338 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.338 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.338 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.338 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:31.338 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.338 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.338 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.338 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.595 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:15:31.595 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:15:32.528 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.528 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:32.528 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.528 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.528 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.528 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.528 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:32.528 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:32.786 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:32.786 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.786 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.786 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:32.786 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:32.786 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.786 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.786 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.786 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.786 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.786 10:34:21 
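Each keyid in the loop is first authorized on the target and then dialed from the SPDK host through the bdev layer with the same key pair; the controller-key arguments are only present when a matching ckeyN was registered. A sketch of one such iteration over the two RPC sockets, as traced above (variable names are illustrative):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
    # target: allow this host on cnode0 with key2/ckey2
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # host: attach a controller through the SPDK bdev layer using the same keys
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2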
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.786 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.786 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.351 00:15:33.351 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.351 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.351 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.609 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.609 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.609 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.609 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.610 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.610 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.610 { 00:15:33.610 "cntlid": 5, 00:15:33.610 "qid": 0, 00:15:33.610 "state": "enabled", 00:15:33.610 "thread": "nvmf_tgt_poll_group_000", 00:15:33.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:33.610 "listen_address": { 00:15:33.610 "trtype": "TCP", 00:15:33.610 "adrfam": "IPv4", 00:15:33.610 "traddr": "10.0.0.2", 00:15:33.610 "trsvcid": "4420" 00:15:33.610 }, 00:15:33.610 "peer_address": { 00:15:33.610 "trtype": "TCP", 00:15:33.610 "adrfam": "IPv4", 00:15:33.610 "traddr": "10.0.0.1", 00:15:33.610 "trsvcid": "41234" 00:15:33.610 }, 00:15:33.610 "auth": { 00:15:33.610 "state": "completed", 00:15:33.610 "digest": "sha256", 00:15:33.610 "dhgroup": "null" 00:15:33.610 } 00:15:33.610 } 00:15:33.610 ]' 00:15:33.610 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.610 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.610 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.610 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:33.610 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.610 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.610 10:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.610 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.868 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:15:33.868 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:15:34.800 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.800 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:34.800 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.800 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.800 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.800 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.800 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:34.800 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:35.058 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:35.058 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.058 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.058 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:35.058 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:35.058 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.058 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:15:35.059 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.059 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.059 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.059 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:35.059 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.059 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.316 00:15:35.316 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.316 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.316 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.575 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.575 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.575 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.575 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.575 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.575 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.575 { 00:15:35.575 "cntlid": 7, 00:15:35.575 "qid": 0, 00:15:35.575 "state": "enabled", 00:15:35.575 "thread": "nvmf_tgt_poll_group_000", 00:15:35.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:35.575 "listen_address": { 00:15:35.575 "trtype": "TCP", 00:15:35.575 "adrfam": "IPv4", 00:15:35.575 "traddr": "10.0.0.2", 00:15:35.575 "trsvcid": "4420" 00:15:35.575 }, 00:15:35.575 "peer_address": { 00:15:35.575 "trtype": "TCP", 00:15:35.575 "adrfam": "IPv4", 00:15:35.575 "traddr": "10.0.0.1", 00:15:35.575 "trsvcid": "41252" 00:15:35.575 }, 00:15:35.575 "auth": { 00:15:35.575 "state": "completed", 00:15:35.575 "digest": "sha256", 00:15:35.575 "dhgroup": "null" 00:15:35.575 } 00:15:35.575 } 00:15:35.575 ]' 00:15:35.575 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.832 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.832 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.832 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:35.833 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.833 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.833 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.833 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.090 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:15:36.090 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:15:37.023 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.023 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:37.023 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.023 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.023 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.023 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:37.023 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.023 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:37.023 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:37.281 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:37.281 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.281 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:37.281 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:37.281 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:37.281 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.281 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.281 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.281 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.281 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.281 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.281 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.281 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.539 00:15:37.540 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.540 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.540 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.796 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.796 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.796 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.796 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.796 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.796 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.796 { 00:15:37.796 "cntlid": 9, 00:15:37.796 "qid": 0, 00:15:37.796 "state": "enabled", 00:15:37.796 "thread": "nvmf_tgt_poll_group_000", 00:15:37.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:37.796 "listen_address": { 00:15:37.796 "trtype": "TCP", 00:15:37.796 "adrfam": "IPv4", 00:15:37.796 "traddr": "10.0.0.2", 00:15:37.796 "trsvcid": "4420" 00:15:37.796 }, 00:15:37.796 "peer_address": { 00:15:37.796 "trtype": "TCP", 00:15:37.796 "adrfam": "IPv4", 00:15:37.796 "traddr": "10.0.0.1", 00:15:37.796 "trsvcid": "41290" 00:15:37.796 }, 00:15:37.796 "auth": { 00:15:37.796 "state": "completed", 00:15:37.796 "digest": "sha256", 00:15:37.796 "dhgroup": "ffdhe2048" 00:15:37.796 } 00:15:37.796 } 00:15:37.796 ]' 00:15:37.796 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.053 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.053 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.053 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:15:38.053 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.053 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.053 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.053 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.311 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:15:38.311 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:15:39.245 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.245 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:39.245 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.245 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.245 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.245 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.245 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:39.245 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:39.503 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:39.503 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.503 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.503 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:39.503 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:39.503 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.503 10:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.503 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.503 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.503 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.503 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.503 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.503 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.760 00:15:39.760 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.760 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.760 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.018 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.018 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.018 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.018 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.276 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.276 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.276 { 00:15:40.276 "cntlid": 11, 00:15:40.276 "qid": 0, 00:15:40.276 "state": "enabled", 00:15:40.276 "thread": "nvmf_tgt_poll_group_000", 00:15:40.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:40.276 "listen_address": { 00:15:40.276 "trtype": "TCP", 00:15:40.276 "adrfam": "IPv4", 00:15:40.276 "traddr": "10.0.0.2", 00:15:40.276 "trsvcid": "4420" 00:15:40.276 }, 00:15:40.276 "peer_address": { 00:15:40.276 "trtype": "TCP", 00:15:40.276 "adrfam": "IPv4", 00:15:40.276 "traddr": "10.0.0.1", 00:15:40.276 "trsvcid": "46390" 00:15:40.276 }, 00:15:40.276 "auth": { 00:15:40.276 "state": "completed", 00:15:40.276 "digest": "sha256", 00:15:40.276 "dhgroup": "ffdhe2048" 00:15:40.276 } 00:15:40.276 } 00:15:40.276 ]' 00:15:40.276 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.276 10:34:28 
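After every attach the script verifies that the controller actually came up and that the completed authentication used the digest and DH group under test, by reading the qpair's auth block back from the target. A sketch of those checks, following the jq filters visible in the trace (shown for the sha256/ffdhe2048 pass):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # host side: the attached controller must show up as nvme0
    [[ "$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    # target side: the qpair must report the negotiated digest/dhgroup and a completed auth state
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha256 ]]
    [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe2048 ]]
    [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]
    # tear the bdev controller down again before the next key
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0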
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.276 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.276 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:40.276 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.276 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.276 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.276 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.534 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:15:40.534 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:15:41.466 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.466 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:41.466 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.466 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.466 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.466 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.466 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:41.466 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:41.725 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:41.725 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.725 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.725 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:41.725 10:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:41.725 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.725 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.725 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.725 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.725 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.725 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.725 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.725 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.983 00:15:41.983 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.983 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.983 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.241 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.241 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.241 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.241 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.241 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.241 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.241 { 00:15:42.241 "cntlid": 13, 00:15:42.241 "qid": 0, 00:15:42.241 "state": "enabled", 00:15:42.241 "thread": "nvmf_tgt_poll_group_000", 00:15:42.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:42.241 "listen_address": { 00:15:42.241 "trtype": "TCP", 00:15:42.241 "adrfam": "IPv4", 00:15:42.241 "traddr": "10.0.0.2", 00:15:42.241 "trsvcid": "4420" 00:15:42.241 }, 00:15:42.241 "peer_address": { 00:15:42.241 "trtype": "TCP", 00:15:42.241 "adrfam": "IPv4", 00:15:42.241 "traddr": "10.0.0.1", 00:15:42.241 "trsvcid": "46416" 00:15:42.241 }, 00:15:42.241 "auth": { 00:15:42.241 "state": "completed", 00:15:42.241 "digest": 
"sha256", 00:15:42.241 "dhgroup": "ffdhe2048" 00:15:42.241 } 00:15:42.241 } 00:15:42.241 ]' 00:15:42.241 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.241 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.241 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.499 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.499 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.499 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.499 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.499 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.756 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:15:42.756 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:15:43.690 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.690 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:43.690 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.690 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.690 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.690 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.690 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:43.690 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:43.948 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:43.948 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.948 10:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.948 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:43.948 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:43.948 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.948 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:15:43.948 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.948 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.948 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.948 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:43.948 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.948 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:44.205 00:15:44.205 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.205 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.205 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.463 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.463 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.463 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.463 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.463 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.463 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.463 { 00:15:44.463 "cntlid": 15, 00:15:44.463 "qid": 0, 00:15:44.463 "state": "enabled", 00:15:44.463 "thread": "nvmf_tgt_poll_group_000", 00:15:44.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:44.463 "listen_address": { 00:15:44.463 "trtype": "TCP", 00:15:44.463 "adrfam": "IPv4", 00:15:44.463 "traddr": "10.0.0.2", 00:15:44.463 "trsvcid": "4420" 00:15:44.463 }, 00:15:44.463 "peer_address": { 00:15:44.463 "trtype": "TCP", 00:15:44.463 "adrfam": "IPv4", 00:15:44.463 "traddr": "10.0.0.1", 00:15:44.463 
"trsvcid": "46452" 00:15:44.463 }, 00:15:44.463 "auth": { 00:15:44.463 "state": "completed", 00:15:44.463 "digest": "sha256", 00:15:44.463 "dhgroup": "ffdhe2048" 00:15:44.463 } 00:15:44.463 } 00:15:44.463 ]' 00:15:44.464 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.464 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.464 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.464 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:44.464 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.721 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.721 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.722 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.979 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:15:44.979 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:15:45.913 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.913 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:45.913 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.913 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.913 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.913 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.913 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.913 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:45.913 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:46.170 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:46.170 10:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.170 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.170 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:46.171 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:46.171 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.171 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.171 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.171 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.171 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.171 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.171 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.171 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.436 00:15:46.436 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.436 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.436 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.696 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.696 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.696 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.696 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.696 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.696 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.696 { 00:15:46.696 "cntlid": 17, 00:15:46.696 "qid": 0, 00:15:46.696 "state": "enabled", 00:15:46.696 "thread": "nvmf_tgt_poll_group_000", 00:15:46.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:46.696 "listen_address": { 00:15:46.696 "trtype": "TCP", 00:15:46.696 "adrfam": "IPv4", 
00:15:46.696 "traddr": "10.0.0.2", 00:15:46.696 "trsvcid": "4420" 00:15:46.696 }, 00:15:46.696 "peer_address": { 00:15:46.696 "trtype": "TCP", 00:15:46.696 "adrfam": "IPv4", 00:15:46.696 "traddr": "10.0.0.1", 00:15:46.696 "trsvcid": "46488" 00:15:46.696 }, 00:15:46.696 "auth": { 00:15:46.696 "state": "completed", 00:15:46.696 "digest": "sha256", 00:15:46.696 "dhgroup": "ffdhe3072" 00:15:46.696 } 00:15:46.696 } 00:15:46.696 ]' 00:15:46.696 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.696 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.696 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.954 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.954 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.954 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.954 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.954 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.211 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:15:47.211 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:15:48.145 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.145 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:48.145 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.145 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.145 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.145 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.145 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:48.145 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:48.403 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:48.403 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.403 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.403 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:48.403 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.403 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.403 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.403 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.403 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.403 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.403 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.403 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.403 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.968 00:15:48.968 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.968 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.968 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.227 { 
00:15:49.227 "cntlid": 19, 00:15:49.227 "qid": 0, 00:15:49.227 "state": "enabled", 00:15:49.227 "thread": "nvmf_tgt_poll_group_000", 00:15:49.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:49.227 "listen_address": { 00:15:49.227 "trtype": "TCP", 00:15:49.227 "adrfam": "IPv4", 00:15:49.227 "traddr": "10.0.0.2", 00:15:49.227 "trsvcid": "4420" 00:15:49.227 }, 00:15:49.227 "peer_address": { 00:15:49.227 "trtype": "TCP", 00:15:49.227 "adrfam": "IPv4", 00:15:49.227 "traddr": "10.0.0.1", 00:15:49.227 "trsvcid": "44928" 00:15:49.227 }, 00:15:49.227 "auth": { 00:15:49.227 "state": "completed", 00:15:49.227 "digest": "sha256", 00:15:49.227 "dhgroup": "ffdhe3072" 00:15:49.227 } 00:15:49.227 } 00:15:49.227 ]' 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.227 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.485 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:15:49.485 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:15:50.418 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.418 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:50.418 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.418 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.418 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.418 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.418 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.418 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.676 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:50.676 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.676 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.676 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:50.676 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:50.676 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.676 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.676 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.676 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.676 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.676 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.676 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.676 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.934 00:15:50.934 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.934 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.934 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.191 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.191 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.191 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.191 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.449 10:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.449 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.449 { 00:15:51.449 "cntlid": 21, 00:15:51.449 "qid": 0, 00:15:51.449 "state": "enabled", 00:15:51.449 "thread": "nvmf_tgt_poll_group_000", 00:15:51.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:51.449 "listen_address": { 00:15:51.449 "trtype": "TCP", 00:15:51.449 "adrfam": "IPv4", 00:15:51.449 "traddr": "10.0.0.2", 00:15:51.449 "trsvcid": "4420" 00:15:51.449 }, 00:15:51.449 "peer_address": { 00:15:51.449 "trtype": "TCP", 00:15:51.449 "adrfam": "IPv4", 00:15:51.449 "traddr": "10.0.0.1", 00:15:51.449 "trsvcid": "44958" 00:15:51.449 }, 00:15:51.449 "auth": { 00:15:51.449 "state": "completed", 00:15:51.449 "digest": "sha256", 00:15:51.449 "dhgroup": "ffdhe3072" 00:15:51.449 } 00:15:51.449 } 00:15:51.449 ]' 00:15:51.449 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.449 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.449 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.449 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:51.449 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.449 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.449 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.449 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.708 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:15:51.708 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:15:52.642 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.642 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:52.642 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.642 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.642 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:52.642 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.642 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.642 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.900 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:52.900 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.900 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.900 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:52.900 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:52.900 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.900 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:15:52.900 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.900 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.900 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.900 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:52.900 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.900 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.465 00:15:53.465 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.465 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.465 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.723 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.723 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.723 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.723 10:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.723 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.723 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.723 { 00:15:53.723 "cntlid": 23, 00:15:53.723 "qid": 0, 00:15:53.723 "state": "enabled", 00:15:53.723 "thread": "nvmf_tgt_poll_group_000", 00:15:53.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:53.723 "listen_address": { 00:15:53.723 "trtype": "TCP", 00:15:53.723 "adrfam": "IPv4", 00:15:53.723 "traddr": "10.0.0.2", 00:15:53.723 "trsvcid": "4420" 00:15:53.723 }, 00:15:53.723 "peer_address": { 00:15:53.723 "trtype": "TCP", 00:15:53.723 "adrfam": "IPv4", 00:15:53.723 "traddr": "10.0.0.1", 00:15:53.723 "trsvcid": "44984" 00:15:53.723 }, 00:15:53.723 "auth": { 00:15:53.723 "state": "completed", 00:15:53.723 "digest": "sha256", 00:15:53.723 "dhgroup": "ffdhe3072" 00:15:53.723 } 00:15:53.723 } 00:15:53.723 ]' 00:15:53.723 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.723 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.723 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.723 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:53.723 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.723 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.723 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.723 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.981 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:15:53.981 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:15:54.915 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.915 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:54.915 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.915 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.915 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:54.915 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.915 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.915 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:54.915 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.173 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:55.173 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.173 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.173 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:55.173 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:55.173 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.173 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.173 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.173 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.173 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.173 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.173 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.173 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.738 00:15:55.738 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.738 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.738 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.996 { 00:15:55.996 "cntlid": 25, 00:15:55.996 "qid": 0, 00:15:55.996 "state": "enabled", 00:15:55.996 "thread": "nvmf_tgt_poll_group_000", 00:15:55.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:55.996 "listen_address": { 00:15:55.996 "trtype": "TCP", 00:15:55.996 "adrfam": "IPv4", 00:15:55.996 "traddr": "10.0.0.2", 00:15:55.996 "trsvcid": "4420" 00:15:55.996 }, 00:15:55.996 "peer_address": { 00:15:55.996 "trtype": "TCP", 00:15:55.996 "adrfam": "IPv4", 00:15:55.996 "traddr": "10.0.0.1", 00:15:55.996 "trsvcid": "45010" 00:15:55.996 }, 00:15:55.996 "auth": { 00:15:55.996 "state": "completed", 00:15:55.996 "digest": "sha256", 00:15:55.996 "dhgroup": "ffdhe4096" 00:15:55.996 } 00:15:55.996 } 00:15:55.996 ]' 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.996 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.253 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:15:56.253 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:15:57.187 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.188 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:57.188 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.188 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.188 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.188 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.188 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:57.188 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:57.445 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:57.445 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.445 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.445 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:57.445 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:57.445 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.445 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.445 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.445 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.445 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.445 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.445 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.445 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.010 00:15:58.010 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.010 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.010 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.268 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.268 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.268 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.268 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.268 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.268 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.268 { 00:15:58.268 "cntlid": 27, 00:15:58.268 "qid": 0, 00:15:58.268 "state": "enabled", 00:15:58.268 "thread": "nvmf_tgt_poll_group_000", 00:15:58.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:58.268 "listen_address": { 00:15:58.268 "trtype": "TCP", 00:15:58.268 "adrfam": "IPv4", 00:15:58.268 "traddr": "10.0.0.2", 00:15:58.268 "trsvcid": "4420" 00:15:58.268 }, 00:15:58.268 "peer_address": { 00:15:58.268 "trtype": "TCP", 00:15:58.268 "adrfam": "IPv4", 00:15:58.268 "traddr": "10.0.0.1", 00:15:58.268 "trsvcid": "45040" 00:15:58.268 }, 00:15:58.269 "auth": { 00:15:58.269 "state": "completed", 00:15:58.269 "digest": "sha256", 00:15:58.269 "dhgroup": "ffdhe4096" 00:15:58.269 } 00:15:58.269 } 00:15:58.269 ]' 00:15:58.269 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.269 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.269 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.269 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.269 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.269 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.269 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.269 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.526 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:15:58.526 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:15:59.458 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:59.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.458 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:59.458 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.458 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.458 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.458 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.458 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:59.459 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:59.716 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:59.716 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.716 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:59.716 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:59.716 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:59.716 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.716 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.716 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.716 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.973 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.973 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.973 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.973 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.230 00:16:00.230 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:16:00.230 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.230 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.487 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.487 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.487 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.487 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.487 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.487 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.487 { 00:16:00.487 "cntlid": 29, 00:16:00.487 "qid": 0, 00:16:00.487 "state": "enabled", 00:16:00.487 "thread": "nvmf_tgt_poll_group_000", 00:16:00.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:00.487 "listen_address": { 00:16:00.487 "trtype": "TCP", 00:16:00.487 "adrfam": "IPv4", 00:16:00.487 "traddr": "10.0.0.2", 00:16:00.487 "trsvcid": "4420" 00:16:00.487 }, 00:16:00.487 "peer_address": { 00:16:00.487 "trtype": "TCP", 00:16:00.487 "adrfam": "IPv4", 00:16:00.487 "traddr": "10.0.0.1", 00:16:00.487 "trsvcid": "52472" 00:16:00.487 }, 00:16:00.487 "auth": { 00:16:00.487 "state": "completed", 00:16:00.487 "digest": "sha256", 00:16:00.487 "dhgroup": "ffdhe4096" 00:16:00.487 } 00:16:00.487 } 00:16:00.487 ]' 00:16:00.487 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.487 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.487 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.487 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.487 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.745 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.745 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.745 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.002 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:01.002 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: 
--dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:01.934 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.934 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:01.934 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.934 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.934 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.934 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.934 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:01.934 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:02.192 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:02.192 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.192 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.192 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:02.192 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:02.192 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.192 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:02.192 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.192 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.192 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.192 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:02.192 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.192 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.449 00:16:02.707 10:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.707 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.707 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.965 { 00:16:02.965 "cntlid": 31, 00:16:02.965 "qid": 0, 00:16:02.965 "state": "enabled", 00:16:02.965 "thread": "nvmf_tgt_poll_group_000", 00:16:02.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:02.965 "listen_address": { 00:16:02.965 "trtype": "TCP", 00:16:02.965 "adrfam": "IPv4", 00:16:02.965 "traddr": "10.0.0.2", 00:16:02.965 "trsvcid": "4420" 00:16:02.965 }, 00:16:02.965 "peer_address": { 00:16:02.965 "trtype": "TCP", 00:16:02.965 "adrfam": "IPv4", 00:16:02.965 "traddr": "10.0.0.1", 00:16:02.965 "trsvcid": "52500" 00:16:02.965 }, 00:16:02.965 "auth": { 00:16:02.965 "state": "completed", 00:16:02.965 "digest": "sha256", 00:16:02.965 "dhgroup": "ffdhe4096" 00:16:02.965 } 00:16:02.965 } 00:16:02.965 ]' 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.965 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.221 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:16:03.221 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret 
DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:16:04.152 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.152 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:04.152 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.152 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.152 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.152 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.152 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.152 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.152 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.410 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:04.410 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.410 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.410 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:04.410 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:04.410 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.410 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.410 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.410 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.410 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.410 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.410 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.410 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.975 00:16:04.975 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.975 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.975 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.233 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.233 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.233 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.233 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.233 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.233 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.233 { 00:16:05.233 "cntlid": 33, 00:16:05.233 "qid": 0, 00:16:05.233 "state": "enabled", 00:16:05.233 "thread": "nvmf_tgt_poll_group_000", 00:16:05.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:05.234 "listen_address": { 00:16:05.234 "trtype": "TCP", 00:16:05.234 "adrfam": "IPv4", 00:16:05.234 "traddr": "10.0.0.2", 00:16:05.234 "trsvcid": "4420" 00:16:05.234 }, 00:16:05.234 "peer_address": { 00:16:05.234 "trtype": "TCP", 00:16:05.234 "adrfam": "IPv4", 00:16:05.234 "traddr": "10.0.0.1", 00:16:05.234 "trsvcid": "52534" 00:16:05.234 }, 00:16:05.234 "auth": { 00:16:05.234 "state": "completed", 00:16:05.234 "digest": "sha256", 00:16:05.234 "dhgroup": "ffdhe6144" 00:16:05.234 } 00:16:05.234 } 00:16:05.234 ]' 00:16:05.234 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.234 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.234 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.234 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.234 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.234 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.234 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.234 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.798 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret 
DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:16:05.799 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:16:06.363 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.363 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:06.363 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.363 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.620 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.620 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.620 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:06.620 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:06.879 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:06.879 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.879 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.879 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:06.879 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:06.879 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.879 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.879 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.879 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.879 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.879 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.879 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.879 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.444 00:16:07.444 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.444 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.444 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.702 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.702 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.702 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.702 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.702 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.702 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.702 { 00:16:07.702 "cntlid": 35, 00:16:07.702 "qid": 0, 00:16:07.702 "state": "enabled", 00:16:07.702 "thread": "nvmf_tgt_poll_group_000", 00:16:07.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:07.702 "listen_address": { 00:16:07.702 "trtype": "TCP", 00:16:07.702 "adrfam": "IPv4", 00:16:07.702 "traddr": "10.0.0.2", 00:16:07.702 "trsvcid": "4420" 00:16:07.702 }, 00:16:07.702 "peer_address": { 00:16:07.702 "trtype": "TCP", 00:16:07.702 "adrfam": "IPv4", 00:16:07.702 "traddr": "10.0.0.1", 00:16:07.702 "trsvcid": "52554" 00:16:07.702 }, 00:16:07.702 "auth": { 00:16:07.702 "state": "completed", 00:16:07.702 "digest": "sha256", 00:16:07.702 "dhgroup": "ffdhe6144" 00:16:07.702 } 00:16:07.702 } 00:16:07.702 ]' 00:16:07.702 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.702 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.702 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.702 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:07.702 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.702 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.702 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.702 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.960 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:16:07.960 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:16:08.893 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.894 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:08.894 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.894 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.894 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.894 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.894 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:08.894 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:09.152 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:09.152 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.152 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.152 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:09.152 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:09.152 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.152 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.152 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.152 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.152 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.152 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.152 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.153 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.719 00:16:09.719 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.719 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.719 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.977 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.977 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.977 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.977 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.977 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.977 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.977 { 00:16:09.977 "cntlid": 37, 00:16:09.977 "qid": 0, 00:16:09.977 "state": "enabled", 00:16:09.977 "thread": "nvmf_tgt_poll_group_000", 00:16:09.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:09.977 "listen_address": { 00:16:09.977 "trtype": "TCP", 00:16:09.977 "adrfam": "IPv4", 00:16:09.977 "traddr": "10.0.0.2", 00:16:09.977 "trsvcid": "4420" 00:16:09.977 }, 00:16:09.977 "peer_address": { 00:16:09.977 "trtype": "TCP", 00:16:09.977 "adrfam": "IPv4", 00:16:09.977 "traddr": "10.0.0.1", 00:16:09.977 "trsvcid": "51488" 00:16:09.977 }, 00:16:09.977 "auth": { 00:16:09.977 "state": "completed", 00:16:09.977 "digest": "sha256", 00:16:09.977 "dhgroup": "ffdhe6144" 00:16:09.977 } 00:16:09.977 } 00:16:09.977 ]' 00:16:09.977 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.977 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.977 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.235 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.235 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.235 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.235 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:10.235 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.492 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:10.492 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:11.426 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.426 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:11.426 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.426 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.427 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.427 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.427 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.427 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.685 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:11.685 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.685 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:11.685 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:11.685 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:11.685 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.685 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:11.685 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.685 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.685 10:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.685 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:11.685 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.685 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.250 00:16:12.250 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.250 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.250 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.508 { 00:16:12.508 "cntlid": 39, 00:16:12.508 "qid": 0, 00:16:12.508 "state": "enabled", 00:16:12.508 "thread": "nvmf_tgt_poll_group_000", 00:16:12.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:12.508 "listen_address": { 00:16:12.508 "trtype": "TCP", 00:16:12.508 "adrfam": "IPv4", 00:16:12.508 "traddr": "10.0.0.2", 00:16:12.508 "trsvcid": "4420" 00:16:12.508 }, 00:16:12.508 "peer_address": { 00:16:12.508 "trtype": "TCP", 00:16:12.508 "adrfam": "IPv4", 00:16:12.508 "traddr": "10.0.0.1", 00:16:12.508 "trsvcid": "51512" 00:16:12.508 }, 00:16:12.508 "auth": { 00:16:12.508 "state": "completed", 00:16:12.508 "digest": "sha256", 00:16:12.508 "dhgroup": "ffdhe6144" 00:16:12.508 } 00:16:12.508 } 00:16:12.508 ]' 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.508 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.074 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:16:13.074 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:16:13.639 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.896 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:13.896 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.896 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.896 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.896 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.896 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.896 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.896 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:14.153 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:14.154 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.154 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.154 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:14.154 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:14.154 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.154 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.154 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
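The qpair checks repeated throughout this block all read the negotiated auth parameters back out of nvmf_subsystem_get_qpairs with jq (target/auth.sh@74-77 in the entries above). A minimal standalone sketch of that verification, assuming the same subsystem NQN, jq paths, and expected "completed" state seen in the log; the helper name verify_qpair_auth and the rpc variable are illustrative and not part of auth.sh:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Hypothetical helper mirroring the checks above: fetch the qpairs for the
# subsystem and compare the negotiated digest, dhgroup and auth state.
verify_qpair_auth() {
	local digest=$1 dhgroup=$2 qpairs
	qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0) || return 1
	[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"    ]] || return 1
	[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup"   ]] || return 1
	[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed"  ]] || return 1
}

# Example for the ffdhe8192/sha256 pass that follows in this log:
verify_qpair_auth sha256 ffdhe8192
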
00:16:14.154 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.154 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.154 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.154 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.154 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.086 00:16:15.086 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.086 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.087 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.344 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.344 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.344 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.344 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.344 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.344 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.344 { 00:16:15.344 "cntlid": 41, 00:16:15.344 "qid": 0, 00:16:15.344 "state": "enabled", 00:16:15.344 "thread": "nvmf_tgt_poll_group_000", 00:16:15.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:15.344 "listen_address": { 00:16:15.344 "trtype": "TCP", 00:16:15.344 "adrfam": "IPv4", 00:16:15.344 "traddr": "10.0.0.2", 00:16:15.344 "trsvcid": "4420" 00:16:15.344 }, 00:16:15.344 "peer_address": { 00:16:15.344 "trtype": "TCP", 00:16:15.344 "adrfam": "IPv4", 00:16:15.344 "traddr": "10.0.0.1", 00:16:15.344 "trsvcid": "51542" 00:16:15.344 }, 00:16:15.344 "auth": { 00:16:15.344 "state": "completed", 00:16:15.344 "digest": "sha256", 00:16:15.344 "dhgroup": "ffdhe8192" 00:16:15.344 } 00:16:15.344 } 00:16:15.344 ]' 00:16:15.344 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.344 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.345 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.345 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.345 10:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.345 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.345 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.345 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.910 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:16:15.910 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:16:16.841 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.841 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:16.841 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.841 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.841 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.841 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.841 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.841 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:17.100 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:17.100 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.100 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.100 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:17.100 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:17.100 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.100 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.100 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.100 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.100 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.100 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.100 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.100 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.034 00:16:18.034 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.034 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.034 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.034 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.034 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.034 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.034 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.034 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.034 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.034 { 00:16:18.034 "cntlid": 43, 00:16:18.034 "qid": 0, 00:16:18.034 "state": "enabled", 00:16:18.034 "thread": "nvmf_tgt_poll_group_000", 00:16:18.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:18.034 "listen_address": { 00:16:18.034 "trtype": "TCP", 00:16:18.034 "adrfam": "IPv4", 00:16:18.034 "traddr": "10.0.0.2", 00:16:18.034 "trsvcid": "4420" 00:16:18.034 }, 00:16:18.034 "peer_address": { 00:16:18.034 "trtype": "TCP", 00:16:18.034 "adrfam": "IPv4", 00:16:18.034 "traddr": "10.0.0.1", 00:16:18.034 "trsvcid": "51552" 00:16:18.034 }, 00:16:18.034 "auth": { 00:16:18.034 "state": "completed", 00:16:18.034 "digest": "sha256", 00:16:18.034 "dhgroup": "ffdhe8192" 00:16:18.034 } 00:16:18.034 } 00:16:18.034 ]' 00:16:18.034 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.291 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:18.291 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.291 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:18.291 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.291 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.291 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.291 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.549 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:16:18.549 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:16:19.481 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.481 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:19.481 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.481 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.481 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.481 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.481 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:19.481 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:19.739 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:19.739 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.739 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.739 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:19.739 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.739 10:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.739 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.739 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.739 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.739 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.739 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.739 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.739 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.671 00:16:20.671 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.671 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.671 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.929 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.929 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.929 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.929 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.929 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.929 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.929 { 00:16:20.929 "cntlid": 45, 00:16:20.929 "qid": 0, 00:16:20.929 "state": "enabled", 00:16:20.929 "thread": "nvmf_tgt_poll_group_000", 00:16:20.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:20.929 "listen_address": { 00:16:20.929 "trtype": "TCP", 00:16:20.929 "adrfam": "IPv4", 00:16:20.929 "traddr": "10.0.0.2", 00:16:20.929 "trsvcid": "4420" 00:16:20.929 }, 00:16:20.929 "peer_address": { 00:16:20.929 "trtype": "TCP", 00:16:20.929 "adrfam": "IPv4", 00:16:20.929 "traddr": "10.0.0.1", 00:16:20.929 "trsvcid": "35818" 00:16:20.929 }, 00:16:20.929 "auth": { 00:16:20.929 "state": "completed", 00:16:20.929 "digest": "sha256", 00:16:20.929 "dhgroup": "ffdhe8192" 00:16:20.929 } 00:16:20.929 } 00:16:20.929 ]' 00:16:20.929 
10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.929 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.929 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.929 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.929 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.929 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.929 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.929 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.187 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:21.187 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:22.119 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.119 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:22.119 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.119 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.119 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.119 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.119 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:22.119 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:22.689 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:22.689 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.689 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.689 10:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:22.689 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.689 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.689 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:22.689 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.689 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.689 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.689 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:22.689 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.689 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.339 00:16:23.339 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.339 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.339 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.637 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.637 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.637 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.637 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.637 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.637 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.637 { 00:16:23.637 "cntlid": 47, 00:16:23.637 "qid": 0, 00:16:23.637 "state": "enabled", 00:16:23.637 "thread": "nvmf_tgt_poll_group_000", 00:16:23.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:23.637 "listen_address": { 00:16:23.637 "trtype": "TCP", 00:16:23.637 "adrfam": "IPv4", 00:16:23.637 "traddr": "10.0.0.2", 00:16:23.637 "trsvcid": "4420" 00:16:23.637 }, 00:16:23.637 "peer_address": { 00:16:23.637 "trtype": "TCP", 00:16:23.637 "adrfam": "IPv4", 00:16:23.637 "traddr": "10.0.0.1", 00:16:23.637 "trsvcid": "35844" 00:16:23.637 }, 00:16:23.637 "auth": { 00:16:23.637 "state": "completed", 00:16:23.637 
"digest": "sha256", 00:16:23.637 "dhgroup": "ffdhe8192" 00:16:23.637 } 00:16:23.637 } 00:16:23.637 ]' 00:16:23.637 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.637 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.637 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.637 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.637 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.932 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.932 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.932 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.208 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:16:24.208 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:16:24.958 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.959 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:24.959 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.959 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.959 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.959 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:24.959 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.959 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.959 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:24.959 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:25.218 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:25.218 10:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.218 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.218 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.218 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:25.218 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.218 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.218 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.218 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.218 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.218 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.218 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.218 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.478 00:16:25.478 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.478 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.478 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.736 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.736 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.736 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.736 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.994 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.994 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.994 { 00:16:25.994 "cntlid": 49, 00:16:25.994 "qid": 0, 00:16:25.994 "state": "enabled", 00:16:25.994 "thread": "nvmf_tgt_poll_group_000", 00:16:25.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:25.994 "listen_address": { 00:16:25.994 "trtype": "TCP", 00:16:25.994 "adrfam": "IPv4", 
00:16:25.994 "traddr": "10.0.0.2", 00:16:25.994 "trsvcid": "4420" 00:16:25.994 }, 00:16:25.994 "peer_address": { 00:16:25.994 "trtype": "TCP", 00:16:25.994 "adrfam": "IPv4", 00:16:25.994 "traddr": "10.0.0.1", 00:16:25.994 "trsvcid": "35868" 00:16:25.994 }, 00:16:25.994 "auth": { 00:16:25.994 "state": "completed", 00:16:25.994 "digest": "sha384", 00:16:25.994 "dhgroup": "null" 00:16:25.994 } 00:16:25.994 } 00:16:25.994 ]' 00:16:25.994 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.994 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.994 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.994 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:25.994 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.994 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.994 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.994 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.253 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:16:26.253 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:16:27.188 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.188 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:27.188 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.188 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.188 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.188 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.188 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:27.188 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:27.446 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:27.446 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.446 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.446 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.446 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.446 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.446 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.446 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.446 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.446 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.446 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.446 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.446 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.703 00:16:27.703 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.704 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.704 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.960 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.960 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.960 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.960 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.960 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.960 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.960 { 00:16:27.960 "cntlid": 51, 00:16:27.960 "qid": 0, 00:16:27.960 "state": "enabled", 
00:16:27.960 "thread": "nvmf_tgt_poll_group_000", 00:16:27.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:27.961 "listen_address": { 00:16:27.961 "trtype": "TCP", 00:16:27.961 "adrfam": "IPv4", 00:16:27.961 "traddr": "10.0.0.2", 00:16:27.961 "trsvcid": "4420" 00:16:27.961 }, 00:16:27.961 "peer_address": { 00:16:27.961 "trtype": "TCP", 00:16:27.961 "adrfam": "IPv4", 00:16:27.961 "traddr": "10.0.0.1", 00:16:27.961 "trsvcid": "35894" 00:16:27.961 }, 00:16:27.961 "auth": { 00:16:27.961 "state": "completed", 00:16:27.961 "digest": "sha384", 00:16:27.961 "dhgroup": "null" 00:16:27.961 } 00:16:27.961 } 00:16:27.961 ]' 00:16:27.961 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.218 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.218 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.218 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.218 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.218 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.218 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.218 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.476 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:16:28.476 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:16:29.407 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.407 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:29.407 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.407 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.407 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.407 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.407 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:29.407 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.665 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:29.665 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.665 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.665 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:29.665 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.665 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.665 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.665 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.665 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.665 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.665 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.665 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.665 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.922 00:16:29.922 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.922 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.922 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.181 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.181 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.181 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.181 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.181 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.181 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.181 { 00:16:30.181 "cntlid": 53, 00:16:30.181 "qid": 0, 00:16:30.181 "state": "enabled", 00:16:30.181 "thread": "nvmf_tgt_poll_group_000", 00:16:30.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:30.181 "listen_address": { 00:16:30.181 "trtype": "TCP", 00:16:30.181 "adrfam": "IPv4", 00:16:30.181 "traddr": "10.0.0.2", 00:16:30.181 "trsvcid": "4420" 00:16:30.181 }, 00:16:30.181 "peer_address": { 00:16:30.181 "trtype": "TCP", 00:16:30.181 "adrfam": "IPv4", 00:16:30.181 "traddr": "10.0.0.1", 00:16:30.181 "trsvcid": "48482" 00:16:30.181 }, 00:16:30.181 "auth": { 00:16:30.181 "state": "completed", 00:16:30.181 "digest": "sha384", 00:16:30.181 "dhgroup": "null" 00:16:30.181 } 00:16:30.181 } 00:16:30.181 ]' 00:16:30.181 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.181 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.181 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.439 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:30.439 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.439 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.439 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.439 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.697 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:30.697 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:31.630 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.630 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:31.630 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.630 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.630 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.630 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:31.630 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:31.631 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:31.888 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:31.888 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.888 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.888 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:31.888 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.888 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.888 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:31.888 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.888 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.888 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.889 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.889 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.889 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.146 00:16:32.146 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.146 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.146 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.404 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.404 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.404 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.404 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.404 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.404 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.404 { 00:16:32.404 "cntlid": 55, 00:16:32.404 "qid": 0, 00:16:32.404 "state": "enabled", 00:16:32.404 "thread": "nvmf_tgt_poll_group_000", 00:16:32.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:32.404 "listen_address": { 00:16:32.404 "trtype": "TCP", 00:16:32.404 "adrfam": "IPv4", 00:16:32.404 "traddr": "10.0.0.2", 00:16:32.404 "trsvcid": "4420" 00:16:32.404 }, 00:16:32.404 "peer_address": { 00:16:32.404 "trtype": "TCP", 00:16:32.404 "adrfam": "IPv4", 00:16:32.404 "traddr": "10.0.0.1", 00:16:32.404 "trsvcid": "48504" 00:16:32.404 }, 00:16:32.404 "auth": { 00:16:32.404 "state": "completed", 00:16:32.404 "digest": "sha384", 00:16:32.404 "dhgroup": "null" 00:16:32.404 } 00:16:32.404 } 00:16:32.404 ]' 00:16:32.404 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.662 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.662 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.662 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:32.662 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.662 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.662 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.662 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.921 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:16:32.921 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:16:33.855 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.855 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:33.855 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.855 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.855 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.855 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.855 10:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.855 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:33.855 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:34.113 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:34.113 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.113 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.113 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.113 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.113 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.113 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.113 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.113 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.113 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.113 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.113 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.113 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.372 00:16:34.372 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.372 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.372 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.630 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.630 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.630 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:34.630 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.630 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.630 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.630 { 00:16:34.630 "cntlid": 57, 00:16:34.630 "qid": 0, 00:16:34.630 "state": "enabled", 00:16:34.630 "thread": "nvmf_tgt_poll_group_000", 00:16:34.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:34.630 "listen_address": { 00:16:34.630 "trtype": "TCP", 00:16:34.630 "adrfam": "IPv4", 00:16:34.630 "traddr": "10.0.0.2", 00:16:34.630 "trsvcid": "4420" 00:16:34.630 }, 00:16:34.630 "peer_address": { 00:16:34.630 "trtype": "TCP", 00:16:34.630 "adrfam": "IPv4", 00:16:34.630 "traddr": "10.0.0.1", 00:16:34.630 "trsvcid": "48530" 00:16:34.630 }, 00:16:34.630 "auth": { 00:16:34.630 "state": "completed", 00:16:34.630 "digest": "sha384", 00:16:34.630 "dhgroup": "ffdhe2048" 00:16:34.630 } 00:16:34.630 } 00:16:34.630 ]' 00:16:34.630 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.630 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.630 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.887 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.887 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.887 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.887 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.887 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.145 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:16:35.145 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:16:36.078 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.078 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:36.078 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.078 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.078 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.078 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.078 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:36.078 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:36.335 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:36.335 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.335 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.335 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:36.335 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.335 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.335 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.335 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.336 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.336 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.336 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.336 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.336 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.592 00:16:36.592 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.592 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.592 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.850 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.850 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.850 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.850 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.850 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.850 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.850 { 00:16:36.850 "cntlid": 59, 00:16:36.850 "qid": 0, 00:16:36.850 "state": "enabled", 00:16:36.850 "thread": "nvmf_tgt_poll_group_000", 00:16:36.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:36.850 "listen_address": { 00:16:36.850 "trtype": "TCP", 00:16:36.850 "adrfam": "IPv4", 00:16:36.850 "traddr": "10.0.0.2", 00:16:36.850 "trsvcid": "4420" 00:16:36.850 }, 00:16:36.850 "peer_address": { 00:16:36.850 "trtype": "TCP", 00:16:36.850 "adrfam": "IPv4", 00:16:36.850 "traddr": "10.0.0.1", 00:16:36.850 "trsvcid": "48562" 00:16:36.850 }, 00:16:36.850 "auth": { 00:16:36.850 "state": "completed", 00:16:36.850 "digest": "sha384", 00:16:36.850 "dhgroup": "ffdhe2048" 00:16:36.850 } 00:16:36.850 } 00:16:36.850 ]' 00:16:36.850 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.107 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.107 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.107 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.107 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.107 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.107 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.107 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.364 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:16:37.364 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:16:38.298 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.299 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:38.299 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.299 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.299 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.299 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.299 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:38.299 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:38.556 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:38.556 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.556 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.556 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:38.556 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.556 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.556 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.556 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.556 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.556 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.556 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.556 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.556 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.814 00:16:38.814 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.814 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.814 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.071 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.071 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.072 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.072 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.072 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.072 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.072 { 00:16:39.072 "cntlid": 61, 00:16:39.072 "qid": 0, 00:16:39.072 "state": "enabled", 00:16:39.072 "thread": "nvmf_tgt_poll_group_000", 00:16:39.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:39.072 "listen_address": { 00:16:39.072 "trtype": "TCP", 00:16:39.072 "adrfam": "IPv4", 00:16:39.072 "traddr": "10.0.0.2", 00:16:39.072 "trsvcid": "4420" 00:16:39.072 }, 00:16:39.072 "peer_address": { 00:16:39.072 "trtype": "TCP", 00:16:39.072 "adrfam": "IPv4", 00:16:39.072 "traddr": "10.0.0.1", 00:16:39.072 "trsvcid": "52104" 00:16:39.072 }, 00:16:39.072 "auth": { 00:16:39.072 "state": "completed", 00:16:39.072 "digest": "sha384", 00:16:39.072 "dhgroup": "ffdhe2048" 00:16:39.072 } 00:16:39.072 } 00:16:39.072 ]' 00:16:39.072 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.329 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.329 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.329 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.329 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.329 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.329 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.329 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.587 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:39.587 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:40.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:40.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:40.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:40.779 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:40.779 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.779 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:40.779 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:40.779 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.779 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.779 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:40.779 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.779 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.779 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.779 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.779 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.779 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.036 00:16:41.294 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.294 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.294 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.552 { 00:16:41.552 "cntlid": 63, 00:16:41.552 "qid": 0, 00:16:41.552 "state": "enabled", 00:16:41.552 "thread": "nvmf_tgt_poll_group_000", 00:16:41.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:41.552 "listen_address": { 00:16:41.552 "trtype": "TCP", 00:16:41.552 "adrfam": "IPv4", 00:16:41.552 "traddr": "10.0.0.2", 00:16:41.552 "trsvcid": "4420" 00:16:41.552 }, 00:16:41.552 "peer_address": { 00:16:41.552 "trtype": "TCP", 00:16:41.552 "adrfam": "IPv4", 00:16:41.552 "traddr": "10.0.0.1", 00:16:41.552 "trsvcid": "52136" 00:16:41.552 }, 00:16:41.552 "auth": { 00:16:41.552 "state": "completed", 00:16:41.552 "digest": "sha384", 00:16:41.552 "dhgroup": "ffdhe2048" 00:16:41.552 } 00:16:41.552 } 00:16:41.552 ]' 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.552 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.810 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:16:41.810 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:16:42.744 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:42.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.744 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:42.744 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.744 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.744 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.744 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.744 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.744 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:42.744 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.002 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:43.003 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.003 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.003 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.003 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.003 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.003 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.003 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.003 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.003 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.003 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.003 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.003 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.261 
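The target/auth.sh@119, @120 and @121 markers above come from the nested loops that drive these repetitions: the outer loop advances the DH group (null, then ffdhe2048, then ffdhe3072 in this part of the trace) and the inner loop re-runs connect_authenticate for every configured key index. A minimal sketch of that structure, using only the loop headers and calls visible in the trace (array contents beyond what this excerpt shows are omitted):

    for dhgroup in "${dhgroups[@]}"; do        # @119: null, ffdhe2048, ffdhe3072, ...
        for keyid in "${!keys[@]}"; do         # @120: key0..key3 in this trace
            # @121: pin the host-side initiator to one digest/dhgroup combination
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            # @123: add host, attach, verify qpair auth state, detach, nvme connect/disconnect
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done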
00:16:43.261 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.261 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.261 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.519 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.519 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.519 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.519 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.519 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.519 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.519 { 00:16:43.519 "cntlid": 65, 00:16:43.519 "qid": 0, 00:16:43.519 "state": "enabled", 00:16:43.519 "thread": "nvmf_tgt_poll_group_000", 00:16:43.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:43.519 "listen_address": { 00:16:43.519 "trtype": "TCP", 00:16:43.519 "adrfam": "IPv4", 00:16:43.519 "traddr": "10.0.0.2", 00:16:43.519 "trsvcid": "4420" 00:16:43.519 }, 00:16:43.519 "peer_address": { 00:16:43.519 "trtype": "TCP", 00:16:43.519 "adrfam": "IPv4", 00:16:43.519 "traddr": "10.0.0.1", 00:16:43.519 "trsvcid": "52148" 00:16:43.519 }, 00:16:43.519 "auth": { 00:16:43.519 "state": "completed", 00:16:43.519 "digest": "sha384", 00:16:43.519 "dhgroup": "ffdhe3072" 00:16:43.519 } 00:16:43.519 } 00:16:43.519 ]' 00:16:43.519 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.777 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.777 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.777 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.777 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.777 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.777 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.777 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.035 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:16:44.035 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:16:44.970 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.970 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:44.970 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.970 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.970 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.970 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.970 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:44.970 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:45.227 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:45.228 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.228 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:45.228 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:45.228 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.228 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.228 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.228 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.228 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.228 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.228 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.228 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.228 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.793 00:16:45.793 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.793 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.793 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.793 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.793 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.794 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.794 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.794 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.794 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.794 { 00:16:45.794 "cntlid": 67, 00:16:45.794 "qid": 0, 00:16:45.794 "state": "enabled", 00:16:45.794 "thread": "nvmf_tgt_poll_group_000", 00:16:45.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:45.794 "listen_address": { 00:16:45.794 "trtype": "TCP", 00:16:45.794 "adrfam": "IPv4", 00:16:45.794 "traddr": "10.0.0.2", 00:16:45.794 "trsvcid": "4420" 00:16:45.794 }, 00:16:45.794 "peer_address": { 00:16:45.794 "trtype": "TCP", 00:16:45.794 "adrfam": "IPv4", 00:16:45.794 "traddr": "10.0.0.1", 00:16:45.794 "trsvcid": "52184" 00:16:45.794 }, 00:16:45.794 "auth": { 00:16:45.794 "state": "completed", 00:16:45.794 "digest": "sha384", 00:16:45.794 "dhgroup": "ffdhe3072" 00:16:45.794 } 00:16:45.794 } 00:16:45.794 ]' 00:16:45.794 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.052 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.052 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.052 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.052 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.052 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.052 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.052 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.310 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret 
DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:16:46.310 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:16:47.243 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.243 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:47.243 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.243 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.243 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.243 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.243 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:47.243 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:47.500 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:47.500 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.500 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:47.500 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:47.500 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.500 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.500 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.500 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.500 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.500 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.500 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.500 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.500 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.758 00:16:47.758 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.758 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.758 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.016 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.016 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.016 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.016 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.016 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.016 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.016 { 00:16:48.016 "cntlid": 69, 00:16:48.016 "qid": 0, 00:16:48.016 "state": "enabled", 00:16:48.016 "thread": "nvmf_tgt_poll_group_000", 00:16:48.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:48.016 "listen_address": { 00:16:48.016 "trtype": "TCP", 00:16:48.016 "adrfam": "IPv4", 00:16:48.016 "traddr": "10.0.0.2", 00:16:48.016 "trsvcid": "4420" 00:16:48.016 }, 00:16:48.016 "peer_address": { 00:16:48.016 "trtype": "TCP", 00:16:48.016 "adrfam": "IPv4", 00:16:48.016 "traddr": "10.0.0.1", 00:16:48.016 "trsvcid": "52196" 00:16:48.016 }, 00:16:48.016 "auth": { 00:16:48.016 "state": "completed", 00:16:48.016 "digest": "sha384", 00:16:48.016 "dhgroup": "ffdhe3072" 00:16:48.016 } 00:16:48.016 } 00:16:48.016 ]' 00:16:48.016 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.274 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.274 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.274 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.274 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.274 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.274 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.274 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:48.531 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:48.531 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:49.464 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.464 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:49.464 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.464 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.464 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.464 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.464 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:49.464 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:49.722 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:49.722 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.722 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.722 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:49.722 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:49.722 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.722 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:49.722 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.722 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.722 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.722 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
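[annotation] The trace above repeats one verification cycle per DH-HMAC-CHAP key and FFDHE group. Condensed from the commands visible in the log, a minimal sketch of the host-side bdev path for a single cycle is shown below; the rpc_cmd and hostrpc helpers and the key names key1/ckey1 come from target/auth.sh and are registered earlier in the script outside this excerpt, while the socket paths, addresses and NQNs are taken verbatim from the trace. This is an illustrative reconstruction, not part of the recorded output.
# host-side SPDK bdev_nvme path, one iteration of connect_authenticate() (sketch)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
# restrict the host to one digest/dhgroup combination for this pass
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# authorize the host on the target with the same key pair (rpc_cmd wraps rpc.py
# against the target's RPC socket, which is not shown in this excerpt)
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# attach a controller through the host's bdev layer, authenticating with those keys
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# verify the controller exists and the qpair negotiated the expected parameters
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'            # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect sha384
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect completed
# tear down before the next key
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
Note that key3 has no paired ckey3 in this run, which is why the attach that follows immediately below passes only --dhchap-key key3. [end annotation]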
00:16:49.722 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.722 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.980 00:16:49.980 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.980 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.980 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.237 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.238 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.238 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.238 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.238 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.238 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.238 { 00:16:50.238 "cntlid": 71, 00:16:50.238 "qid": 0, 00:16:50.238 "state": "enabled", 00:16:50.238 "thread": "nvmf_tgt_poll_group_000", 00:16:50.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:50.238 "listen_address": { 00:16:50.238 "trtype": "TCP", 00:16:50.238 "adrfam": "IPv4", 00:16:50.238 "traddr": "10.0.0.2", 00:16:50.238 "trsvcid": "4420" 00:16:50.238 }, 00:16:50.238 "peer_address": { 00:16:50.238 "trtype": "TCP", 00:16:50.238 "adrfam": "IPv4", 00:16:50.238 "traddr": "10.0.0.1", 00:16:50.238 "trsvcid": "35062" 00:16:50.238 }, 00:16:50.238 "auth": { 00:16:50.238 "state": "completed", 00:16:50.238 "digest": "sha384", 00:16:50.238 "dhgroup": "ffdhe3072" 00:16:50.238 } 00:16:50.238 } 00:16:50.238 ]' 00:16:50.238 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.495 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.495 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.495 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.495 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.495 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.495 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.495 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.753 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:16:50.753 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:16:51.686 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.686 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:51.686 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.686 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.686 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.686 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.686 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.686 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:51.686 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:51.944 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:51.944 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.944 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.944 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:51.944 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:51.944 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.944 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.944 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.944 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.944 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
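[annotation] Each cycle also exercises the kernel nvme-cli initiator with the same credentials, passing DHHC-1 interchange-format secrets directly instead of registered key names. Reassembled from the wrapped lines above, one such round trip looks like the sketch below; the DHHC-1 strings are stand-ins for the literal secrets printed in the trace (truncated here, not invented), and hostid matches the UUID in the host NQN.
# kernel initiator path used by nvme_connect() in target/auth.sh (sketch)
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
hostid=8b464f06-2980-e311-ba20-001e67a94acd
key='DHHC-1:00:...'     # host secret exactly as printed in the trace (elided here)
ckey='DHHC-1:03:...'    # controller secret exactly as printed in the trace (elided here)
# connect with bidirectional DH-HMAC-CHAP, then tear everything back down
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n "$subnqn"                              # log shows "disconnected 1 controller(s)"
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"   # revoke the host before the next key
[end annotation]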
00:16:51.944 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.944 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.944 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.202 00:16:52.202 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.202 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.202 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.460 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.460 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.460 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.460 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.460 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.460 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.460 { 00:16:52.460 "cntlid": 73, 00:16:52.460 "qid": 0, 00:16:52.460 "state": "enabled", 00:16:52.460 "thread": "nvmf_tgt_poll_group_000", 00:16:52.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:52.460 "listen_address": { 00:16:52.460 "trtype": "TCP", 00:16:52.460 "adrfam": "IPv4", 00:16:52.460 "traddr": "10.0.0.2", 00:16:52.460 "trsvcid": "4420" 00:16:52.460 }, 00:16:52.460 "peer_address": { 00:16:52.460 "trtype": "TCP", 00:16:52.460 "adrfam": "IPv4", 00:16:52.460 "traddr": "10.0.0.1", 00:16:52.460 "trsvcid": "35102" 00:16:52.460 }, 00:16:52.460 "auth": { 00:16:52.460 "state": "completed", 00:16:52.460 "digest": "sha384", 00:16:52.460 "dhgroup": "ffdhe4096" 00:16:52.460 } 00:16:52.460 } 00:16:52.460 ]' 00:16:52.460 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.718 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.718 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.718 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.718 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.718 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.718 
10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.718 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.976 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:16:52.976 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:16:53.909 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.909 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:53.909 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.909 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.909 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.909 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.909 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.909 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.167 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:54.167 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.167 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.167 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:54.167 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.167 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.167 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.167 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.167 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.167 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.167 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.167 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.167 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.424 00:16:54.682 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.682 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.682 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.941 { 00:16:54.941 "cntlid": 75, 00:16:54.941 "qid": 0, 00:16:54.941 "state": "enabled", 00:16:54.941 "thread": "nvmf_tgt_poll_group_000", 00:16:54.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:54.941 "listen_address": { 00:16:54.941 "trtype": "TCP", 00:16:54.941 "adrfam": "IPv4", 00:16:54.941 "traddr": "10.0.0.2", 00:16:54.941 "trsvcid": "4420" 00:16:54.941 }, 00:16:54.941 "peer_address": { 00:16:54.941 "trtype": "TCP", 00:16:54.941 "adrfam": "IPv4", 00:16:54.941 "traddr": "10.0.0.1", 00:16:54.941 "trsvcid": "35124" 00:16:54.941 }, 00:16:54.941 "auth": { 00:16:54.941 "state": "completed", 00:16:54.941 "digest": "sha384", 00:16:54.941 "dhgroup": "ffdhe4096" 00:16:54.941 } 00:16:54.941 } 00:16:54.941 ]' 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.941 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.198 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:16:55.198 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:16:56.131 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.131 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:56.131 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.131 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.131 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.131 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.131 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:56.131 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:56.389 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:56.389 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.389 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.389 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:56.389 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.389 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.389 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.389 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.389 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.389 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.389 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.389 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.389 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.647 00:16:56.647 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.647 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.647 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.212 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.212 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.212 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.212 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.212 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.212 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.212 { 00:16:57.212 "cntlid": 77, 00:16:57.212 "qid": 0, 00:16:57.212 "state": "enabled", 00:16:57.212 "thread": "nvmf_tgt_poll_group_000", 00:16:57.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:57.212 "listen_address": { 00:16:57.212 "trtype": "TCP", 00:16:57.212 "adrfam": "IPv4", 00:16:57.212 "traddr": "10.0.0.2", 00:16:57.212 "trsvcid": "4420" 00:16:57.212 }, 00:16:57.212 "peer_address": { 00:16:57.212 "trtype": "TCP", 00:16:57.212 "adrfam": "IPv4", 00:16:57.212 "traddr": "10.0.0.1", 00:16:57.212 "trsvcid": "35156" 00:16:57.212 }, 00:16:57.212 "auth": { 00:16:57.212 "state": "completed", 00:16:57.212 "digest": "sha384", 00:16:57.212 "dhgroup": "ffdhe4096" 00:16:57.212 } 00:16:57.212 } 00:16:57.212 ]' 00:16:57.212 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.212 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.212 10:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.212 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.212 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.212 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.212 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.212 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.470 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:57.470 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:16:58.402 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.402 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:58.402 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.402 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.402 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.402 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.402 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:58.402 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:58.660 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:58.660 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.660 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.660 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:58.660 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.660 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.660 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:58.660 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.660 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.660 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.660 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.660 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.660 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.918 00:16:58.918 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.918 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.918 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.175 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.175 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.175 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.175 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.175 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.175 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.175 { 00:16:59.175 "cntlid": 79, 00:16:59.175 "qid": 0, 00:16:59.175 "state": "enabled", 00:16:59.175 "thread": "nvmf_tgt_poll_group_000", 00:16:59.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:59.175 "listen_address": { 00:16:59.175 "trtype": "TCP", 00:16:59.175 "adrfam": "IPv4", 00:16:59.175 "traddr": "10.0.0.2", 00:16:59.175 "trsvcid": "4420" 00:16:59.175 }, 00:16:59.175 "peer_address": { 00:16:59.176 "trtype": "TCP", 00:16:59.176 "adrfam": "IPv4", 00:16:59.176 "traddr": "10.0.0.1", 00:16:59.176 "trsvcid": "45634" 00:16:59.176 }, 00:16:59.176 "auth": { 00:16:59.176 "state": "completed", 00:16:59.176 "digest": "sha384", 00:16:59.176 "dhgroup": "ffdhe4096" 00:16:59.176 } 00:16:59.176 } 00:16:59.176 ]' 00:16:59.176 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.433 10:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.433 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.434 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.434 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.434 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.434 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.434 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.691 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:16:59.691 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:17:00.625 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.625 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:00.625 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.625 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.625 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.625 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.625 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.625 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:00.625 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:00.882 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:00.882 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.882 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.882 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:00.882 10:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.882 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.882 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.883 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.883 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.883 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.883 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.883 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.883 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.448 00:17:01.448 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.449 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.449 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.706 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.706 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.706 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.706 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.706 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.706 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.706 { 00:17:01.706 "cntlid": 81, 00:17:01.706 "qid": 0, 00:17:01.706 "state": "enabled", 00:17:01.706 "thread": "nvmf_tgt_poll_group_000", 00:17:01.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:01.706 "listen_address": { 00:17:01.706 "trtype": "TCP", 00:17:01.706 "adrfam": "IPv4", 00:17:01.706 "traddr": "10.0.0.2", 00:17:01.706 "trsvcid": "4420" 00:17:01.706 }, 00:17:01.706 "peer_address": { 00:17:01.706 "trtype": "TCP", 00:17:01.706 "adrfam": "IPv4", 00:17:01.706 "traddr": "10.0.0.1", 00:17:01.706 "trsvcid": "45662" 00:17:01.706 }, 00:17:01.706 "auth": { 00:17:01.706 "state": "completed", 00:17:01.706 "digest": 
"sha384", 00:17:01.706 "dhgroup": "ffdhe6144" 00:17:01.706 } 00:17:01.706 } 00:17:01.706 ]' 00:17:01.706 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.706 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.706 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.707 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.707 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.707 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.707 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.707 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.965 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:01.965 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:02.899 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.899 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:02.899 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.899 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.899 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.899 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.899 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:02.899 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.157 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:03.157 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.157 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.157 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:03.157 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:03.157 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.157 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.157 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.157 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.157 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.157 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.157 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.157 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.723 00:17:03.723 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.723 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.723 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.982 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.982 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.982 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.982 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.982 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.982 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.982 { 00:17:03.982 "cntlid": 83, 00:17:03.982 "qid": 0, 00:17:03.982 "state": "enabled", 00:17:03.982 "thread": "nvmf_tgt_poll_group_000", 00:17:03.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:03.982 "listen_address": { 00:17:03.982 "trtype": "TCP", 00:17:03.982 "adrfam": "IPv4", 00:17:03.982 "traddr": "10.0.0.2", 00:17:03.982 
"trsvcid": "4420" 00:17:03.982 }, 00:17:03.982 "peer_address": { 00:17:03.982 "trtype": "TCP", 00:17:03.982 "adrfam": "IPv4", 00:17:03.982 "traddr": "10.0.0.1", 00:17:03.982 "trsvcid": "45702" 00:17:03.982 }, 00:17:03.982 "auth": { 00:17:03.982 "state": "completed", 00:17:03.982 "digest": "sha384", 00:17:03.982 "dhgroup": "ffdhe6144" 00:17:03.982 } 00:17:03.982 } 00:17:03.982 ]' 00:17:03.982 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.982 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.982 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.982 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.982 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.240 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.240 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.240 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.498 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:17:04.498 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:17:05.431 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.431 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:05.431 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.431 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.431 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.431 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.431 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:05.431 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:05.689 
10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:05.689 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.689 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.689 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:05.689 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.689 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.689 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.689 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.689 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.689 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.689 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.689 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.689 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.255 00:17:06.255 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.255 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.255 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.513 { 00:17:06.513 "cntlid": 85, 00:17:06.513 "qid": 0, 00:17:06.513 "state": "enabled", 00:17:06.513 "thread": "nvmf_tgt_poll_group_000", 00:17:06.513 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:06.513 "listen_address": { 00:17:06.513 "trtype": "TCP", 00:17:06.513 "adrfam": "IPv4", 00:17:06.513 "traddr": "10.0.0.2", 00:17:06.513 "trsvcid": "4420" 00:17:06.513 }, 00:17:06.513 "peer_address": { 00:17:06.513 "trtype": "TCP", 00:17:06.513 "adrfam": "IPv4", 00:17:06.513 "traddr": "10.0.0.1", 00:17:06.513 "trsvcid": "45736" 00:17:06.513 }, 00:17:06.513 "auth": { 00:17:06.513 "state": "completed", 00:17:06.513 "digest": "sha384", 00:17:06.513 "dhgroup": "ffdhe6144" 00:17:06.513 } 00:17:06.513 } 00:17:06.513 ]' 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.513 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.771 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:17:06.771 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:17:07.705 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.705 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:07.705 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.705 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.705 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.705 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.705 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:07.705 10:35:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:07.963 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:07.963 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.963 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.963 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:07.963 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.963 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.963 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:17:07.963 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.963 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.963 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.963 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.963 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.963 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.527 00:17:08.527 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.527 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.527 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.786 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.786 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.786 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.786 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.044 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.044 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.044 { 00:17:09.044 "cntlid": 87, 
00:17:09.044 "qid": 0, 00:17:09.044 "state": "enabled", 00:17:09.044 "thread": "nvmf_tgt_poll_group_000", 00:17:09.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:09.044 "listen_address": { 00:17:09.044 "trtype": "TCP", 00:17:09.044 "adrfam": "IPv4", 00:17:09.044 "traddr": "10.0.0.2", 00:17:09.044 "trsvcid": "4420" 00:17:09.044 }, 00:17:09.044 "peer_address": { 00:17:09.044 "trtype": "TCP", 00:17:09.044 "adrfam": "IPv4", 00:17:09.044 "traddr": "10.0.0.1", 00:17:09.044 "trsvcid": "44990" 00:17:09.044 }, 00:17:09.044 "auth": { 00:17:09.044 "state": "completed", 00:17:09.044 "digest": "sha384", 00:17:09.044 "dhgroup": "ffdhe6144" 00:17:09.044 } 00:17:09.044 } 00:17:09.044 ]' 00:17:09.044 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.044 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.044 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.044 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.044 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.044 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.044 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.044 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.302 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:17:09.302 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:17:10.235 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.235 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:10.235 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.235 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.235 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.235 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.235 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.235 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:10.235 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:10.493 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:10.493 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.493 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.493 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:10.493 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.493 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.493 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.493 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.493 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.493 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.493 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.493 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.493 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.428 00:17:11.428 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.428 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.428 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.686 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.686 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.686 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.686 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.686 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.686 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.686 { 00:17:11.686 "cntlid": 89, 00:17:11.686 "qid": 0, 00:17:11.686 "state": "enabled", 00:17:11.686 "thread": "nvmf_tgt_poll_group_000", 00:17:11.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:11.686 "listen_address": { 00:17:11.686 "trtype": "TCP", 00:17:11.686 "adrfam": "IPv4", 00:17:11.686 "traddr": "10.0.0.2", 00:17:11.686 "trsvcid": "4420" 00:17:11.686 }, 00:17:11.686 "peer_address": { 00:17:11.686 "trtype": "TCP", 00:17:11.687 "adrfam": "IPv4", 00:17:11.687 "traddr": "10.0.0.1", 00:17:11.687 "trsvcid": "45008" 00:17:11.687 }, 00:17:11.687 "auth": { 00:17:11.687 "state": "completed", 00:17:11.687 "digest": "sha384", 00:17:11.687 "dhgroup": "ffdhe8192" 00:17:11.687 } 00:17:11.687 } 00:17:11.687 ]' 00:17:11.687 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.687 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.687 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.687 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.687 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.687 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.687 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.687 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.253 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:12.253 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.186 10:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.186 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.117 00:17:14.117 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.117 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.117 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.375 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.375 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:14.375 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.375 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.375 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.375 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.375 { 00:17:14.375 "cntlid": 91, 00:17:14.375 "qid": 0, 00:17:14.375 "state": "enabled", 00:17:14.375 "thread": "nvmf_tgt_poll_group_000", 00:17:14.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:14.375 "listen_address": { 00:17:14.375 "trtype": "TCP", 00:17:14.375 "adrfam": "IPv4", 00:17:14.375 "traddr": "10.0.0.2", 00:17:14.375 "trsvcid": "4420" 00:17:14.375 }, 00:17:14.375 "peer_address": { 00:17:14.375 "trtype": "TCP", 00:17:14.375 "adrfam": "IPv4", 00:17:14.375 "traddr": "10.0.0.1", 00:17:14.375 "trsvcid": "45040" 00:17:14.375 }, 00:17:14.375 "auth": { 00:17:14.375 "state": "completed", 00:17:14.375 "digest": "sha384", 00:17:14.375 "dhgroup": "ffdhe8192" 00:17:14.375 } 00:17:14.375 } 00:17:14.375 ]' 00:17:14.375 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.375 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.375 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.375 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.375 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.633 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.633 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.633 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.891 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:17:14.891 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:17:15.824 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:15.824 10:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.824 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.757 00:17:16.757 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.757 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.757 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.015 10:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.015 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.015 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.015 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.015 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.015 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.015 { 00:17:17.015 "cntlid": 93, 00:17:17.015 "qid": 0, 00:17:17.015 "state": "enabled", 00:17:17.015 "thread": "nvmf_tgt_poll_group_000", 00:17:17.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:17.015 "listen_address": { 00:17:17.015 "trtype": "TCP", 00:17:17.015 "adrfam": "IPv4", 00:17:17.015 "traddr": "10.0.0.2", 00:17:17.015 "trsvcid": "4420" 00:17:17.015 }, 00:17:17.015 "peer_address": { 00:17:17.015 "trtype": "TCP", 00:17:17.015 "adrfam": "IPv4", 00:17:17.015 "traddr": "10.0.0.1", 00:17:17.015 "trsvcid": "45056" 00:17:17.015 }, 00:17:17.015 "auth": { 00:17:17.015 "state": "completed", 00:17:17.015 "digest": "sha384", 00:17:17.015 "dhgroup": "ffdhe8192" 00:17:17.015 } 00:17:17.015 } 00:17:17.015 ]' 00:17:17.015 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.015 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.015 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.273 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.273 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.273 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.273 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.273 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.531 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:17:17.531 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:17:18.463 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.464 10:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:18.464 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.464 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.464 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.464 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.464 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:18.464 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:18.721 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:18.721 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.721 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.721 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:18.722 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:18.722 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.722 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:17:18.722 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.722 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.722 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.722 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:18.722 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.722 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.656 00:17:19.656 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.656 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.656 
10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.656 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.656 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.656 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.656 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.656 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.656 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.656 { 00:17:19.656 "cntlid": 95, 00:17:19.656 "qid": 0, 00:17:19.656 "state": "enabled", 00:17:19.656 "thread": "nvmf_tgt_poll_group_000", 00:17:19.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:19.656 "listen_address": { 00:17:19.656 "trtype": "TCP", 00:17:19.656 "adrfam": "IPv4", 00:17:19.656 "traddr": "10.0.0.2", 00:17:19.656 "trsvcid": "4420" 00:17:19.656 }, 00:17:19.656 "peer_address": { 00:17:19.656 "trtype": "TCP", 00:17:19.656 "adrfam": "IPv4", 00:17:19.656 "traddr": "10.0.0.1", 00:17:19.656 "trsvcid": "48902" 00:17:19.656 }, 00:17:19.656 "auth": { 00:17:19.656 "state": "completed", 00:17:19.656 "digest": "sha384", 00:17:19.656 "dhgroup": "ffdhe8192" 00:17:19.656 } 00:17:19.656 } 00:17:19.656 ]' 00:17:19.656 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.914 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.914 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.914 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.914 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.914 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.915 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.915 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.172 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:17:20.172 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:17:21.107 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.107 10:36:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:21.107 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.107 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.107 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.107 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:21.107 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.107 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.107 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:21.107 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:21.365 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:21.365 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.365 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.365 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:21.365 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.365 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.365 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.365 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.365 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.365 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.365 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.365 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.365 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.623 00:17:21.623 
10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.623 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.623 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.881 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.881 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.881 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.881 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.881 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.881 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.881 { 00:17:21.881 "cntlid": 97, 00:17:21.881 "qid": 0, 00:17:21.881 "state": "enabled", 00:17:21.881 "thread": "nvmf_tgt_poll_group_000", 00:17:21.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:21.881 "listen_address": { 00:17:21.881 "trtype": "TCP", 00:17:21.881 "adrfam": "IPv4", 00:17:21.881 "traddr": "10.0.0.2", 00:17:21.881 "trsvcid": "4420" 00:17:21.881 }, 00:17:21.881 "peer_address": { 00:17:21.881 "trtype": "TCP", 00:17:21.881 "adrfam": "IPv4", 00:17:21.881 "traddr": "10.0.0.1", 00:17:21.881 "trsvcid": "48942" 00:17:21.881 }, 00:17:21.881 "auth": { 00:17:21.881 "state": "completed", 00:17:21.881 "digest": "sha512", 00:17:21.881 "dhgroup": "null" 00:17:21.881 } 00:17:21.881 } 00:17:21.881 ]' 00:17:21.881 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.881 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.881 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.138 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:22.138 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.138 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.138 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.138 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.396 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:22.396 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 
8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:23.330 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.330 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:23.330 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.330 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.330 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.330 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.330 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.330 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.587 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:23.587 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.587 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.587 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:23.587 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:23.588 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.588 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.588 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.588 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.588 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.588 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.588 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.588 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.845 00:17:23.845 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.845 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.845 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.102 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.102 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.102 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.103 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.103 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.103 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.103 { 00:17:24.103 "cntlid": 99, 00:17:24.103 "qid": 0, 00:17:24.103 "state": "enabled", 00:17:24.103 "thread": "nvmf_tgt_poll_group_000", 00:17:24.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:24.103 "listen_address": { 00:17:24.103 "trtype": "TCP", 00:17:24.103 "adrfam": "IPv4", 00:17:24.103 "traddr": "10.0.0.2", 00:17:24.103 "trsvcid": "4420" 00:17:24.103 }, 00:17:24.103 "peer_address": { 00:17:24.103 "trtype": "TCP", 00:17:24.103 "adrfam": "IPv4", 00:17:24.103 "traddr": "10.0.0.1", 00:17:24.103 "trsvcid": "48976" 00:17:24.103 }, 00:17:24.103 "auth": { 00:17:24.103 "state": "completed", 00:17:24.103 "digest": "sha512", 00:17:24.103 "dhgroup": "null" 00:17:24.103 } 00:17:24.103 } 00:17:24.103 ]' 00:17:24.103 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.103 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.103 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.103 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:24.103 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.360 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.360 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.360 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.618 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:17:24.618 10:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:17:25.551 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.552 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:25.552 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.552 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.552 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.552 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.552 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:25.552 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:25.810 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:25.810 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.810 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.810 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:25.810 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:25.810 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.810 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.810 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.810 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.810 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.810 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.810 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
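The trace above repeats one pattern per key: the target re-registers the host NQN on nqn.2024-03.io.spdk:cnode0 with a specific --dhchap-key/--dhchap-ctrlr-key pair, and the host-side SPDK app (driven over /var/tmp/host.sock) attaches a controller with the matching pair so the DH-HMAC-CHAP handshake actually runs. A minimal sketch of that per-key step, using only the RPCs visible in the trace; the target socket (default), and the assumption that key1/ckey1 are already loaded in the keyring on both sides, are taken from context rather than shown in this excerpt:

#!/usr/bin/env bash
# Sketch only: target SPDK app assumed on the default RPC socket, host SPDK
# app on /var/tmp/host.sock, keys key1/ckey1 assumed already registered.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
SUBNQN=nqn.2024-03.io.spdk:cnode0
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target side: allow this host to authenticate with key1 (ckey1 for the
# controller-to-host direction).
$rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller with the matching key pair; the attach only
# succeeds if the DH-HMAC-CHAP exchange completes.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm the controller came up before inspecting its qpair auth state.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'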
00:17:25.810 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.068 00:17:26.068 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.068 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.068 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.327 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.327 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.327 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.327 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.327 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.327 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.327 { 00:17:26.327 "cntlid": 101, 00:17:26.327 "qid": 0, 00:17:26.327 "state": "enabled", 00:17:26.327 "thread": "nvmf_tgt_poll_group_000", 00:17:26.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:26.327 "listen_address": { 00:17:26.327 "trtype": "TCP", 00:17:26.327 "adrfam": "IPv4", 00:17:26.327 "traddr": "10.0.0.2", 00:17:26.327 "trsvcid": "4420" 00:17:26.327 }, 00:17:26.327 "peer_address": { 00:17:26.327 "trtype": "TCP", 00:17:26.327 "adrfam": "IPv4", 00:17:26.327 "traddr": "10.0.0.1", 00:17:26.327 "trsvcid": "48996" 00:17:26.327 }, 00:17:26.327 "auth": { 00:17:26.327 "state": "completed", 00:17:26.327 "digest": "sha512", 00:17:26.327 "dhgroup": "null" 00:17:26.327 } 00:17:26.327 } 00:17:26.327 ]' 00:17:26.327 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.327 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.327 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.327 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:26.327 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.585 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.585 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.585 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.842 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:17:26.842 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:17:27.775 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.775 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:27.775 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.775 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.775 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.775 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.775 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:27.775 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:27.775 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:27.775 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.775 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.775 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:27.775 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.775 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.775 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:17:27.775 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.775 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.775 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.775 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.775 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.776 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.341 00:17:28.341 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.341 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.341 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.600 { 00:17:28.600 "cntlid": 103, 00:17:28.600 "qid": 0, 00:17:28.600 "state": "enabled", 00:17:28.600 "thread": "nvmf_tgt_poll_group_000", 00:17:28.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:28.600 "listen_address": { 00:17:28.600 "trtype": "TCP", 00:17:28.600 "adrfam": "IPv4", 00:17:28.600 "traddr": "10.0.0.2", 00:17:28.600 "trsvcid": "4420" 00:17:28.600 }, 00:17:28.600 "peer_address": { 00:17:28.600 "trtype": "TCP", 00:17:28.600 "adrfam": "IPv4", 00:17:28.600 "traddr": "10.0.0.1", 00:17:28.600 "trsvcid": "49020" 00:17:28.600 }, 00:17:28.600 "auth": { 00:17:28.600 "state": "completed", 00:17:28.600 "digest": "sha512", 00:17:28.600 "dhgroup": "null" 00:17:28.600 } 00:17:28.600 } 00:17:28.600 ]' 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.600 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.858 10:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:17:28.858 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:17:29.791 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.791 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:29.791 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.791 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.791 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.791 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.791 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.791 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.791 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:30.049 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:30.049 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.049 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.049 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:30.049 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:30.049 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.049 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.049 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.049 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.049 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.049 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
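After each attach the script does not trust the connect alone: it pulls the subsystem's qpairs from the target and compares the negotiated auth parameters against what it just configured, which is the JSON and the [[ ... ]] comparisons repeated throughout the trace. A sketch of that check for the sha512/ffdhe2048 round that has just started, assuming the same target RPC socket and subsystem as in the previous sketch:

SUBNQN=nqn.2024-03.io.spdk:cnode0
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Fetch the qpairs for the subsystem and inspect the "auth" object of the
# first (admin) qpair, exactly the fields the trace checks one by one.
qpairs=$($rpc nvmf_subsystem_get_qpairs "$SUBNQN")

# "completed" means the DH-HMAC-CHAP exchange finished on this qpair.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]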
00:17:30.049 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.049 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.307 00:17:30.307 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.307 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.307 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.565 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.565 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.565 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.565 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.565 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.565 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.565 { 00:17:30.565 "cntlid": 105, 00:17:30.565 "qid": 0, 00:17:30.565 "state": "enabled", 00:17:30.565 "thread": "nvmf_tgt_poll_group_000", 00:17:30.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:30.565 "listen_address": { 00:17:30.565 "trtype": "TCP", 00:17:30.565 "adrfam": "IPv4", 00:17:30.565 "traddr": "10.0.0.2", 00:17:30.565 "trsvcid": "4420" 00:17:30.565 }, 00:17:30.565 "peer_address": { 00:17:30.565 "trtype": "TCP", 00:17:30.565 "adrfam": "IPv4", 00:17:30.565 "traddr": "10.0.0.1", 00:17:30.565 "trsvcid": "55988" 00:17:30.565 }, 00:17:30.565 "auth": { 00:17:30.565 "state": "completed", 00:17:30.565 "digest": "sha512", 00:17:30.565 "dhgroup": "ffdhe2048" 00:17:30.565 } 00:17:30.565 } 00:17:30.565 ]' 00:17:30.565 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.565 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.565 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.823 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.823 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.823 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.823 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.823 10:36:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.081 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:31.081 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:32.015 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.015 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:32.015 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.015 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.015 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.015 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.015 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.015 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.273 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:32.273 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.273 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.273 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:32.273 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.273 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.273 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.273 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.273 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:32.273 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.273 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.273 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.273 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.531 00:17:32.531 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.531 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.531 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.788 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.788 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.788 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.788 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.788 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.788 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.788 { 00:17:32.788 "cntlid": 107, 00:17:32.788 "qid": 0, 00:17:32.788 "state": "enabled", 00:17:32.788 "thread": "nvmf_tgt_poll_group_000", 00:17:32.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:32.788 "listen_address": { 00:17:32.788 "trtype": "TCP", 00:17:32.788 "adrfam": "IPv4", 00:17:32.788 "traddr": "10.0.0.2", 00:17:32.788 "trsvcid": "4420" 00:17:32.788 }, 00:17:32.788 "peer_address": { 00:17:32.788 "trtype": "TCP", 00:17:32.788 "adrfam": "IPv4", 00:17:32.788 "traddr": "10.0.0.1", 00:17:32.788 "trsvcid": "56014" 00:17:32.788 }, 00:17:32.788 "auth": { 00:17:32.788 "state": "completed", 00:17:32.788 "digest": "sha512", 00:17:32.788 "dhgroup": "ffdhe2048" 00:17:32.788 } 00:17:32.788 } 00:17:32.788 ]' 00:17:32.788 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.046 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.046 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.046 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.046 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:33.046 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.046 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.046 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.303 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:17:33.303 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:17:34.235 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.235 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:34.235 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.235 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.235 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.235 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.235 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:34.235 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:34.492 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:34.492 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.492 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.492 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:34.492 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:34.492 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.492 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 
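Each round ends the same way in the trace: the SPDK host-side controller is detached, the kernel nvme-cli initiator reconnects with the raw DHHC-1 secrets (exercising the same keys outside SPDK's host stack), and the host entry is removed from the subsystem before the next key is configured. A sketch of that tail, with the secret values left as placeholder variables since the real values are the generated DHHC-1 strings printed in the log:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
SUBNQN=nqn.2024-03.io.spdk:cnode0
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Drop the SPDK host-side controller once its auth state has been verified.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Same keys, kernel initiator. DHCHAP_SECRET / DHCHAP_CTRL_SECRET are
# placeholders for the DHHC-1:xx:...: strings shown in the trace.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n "$SUBNQN"

# Clean up the host entry so the next key/dhgroup combination starts fresh.
$rpc nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"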
00:17:34.492 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.492 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.492 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.492 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.492 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.492 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.750 00:17:35.007 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.007 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.007 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.264 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.264 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.264 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.265 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.265 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.265 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.265 { 00:17:35.265 "cntlid": 109, 00:17:35.265 "qid": 0, 00:17:35.265 "state": "enabled", 00:17:35.265 "thread": "nvmf_tgt_poll_group_000", 00:17:35.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:35.265 "listen_address": { 00:17:35.265 "trtype": "TCP", 00:17:35.265 "adrfam": "IPv4", 00:17:35.265 "traddr": "10.0.0.2", 00:17:35.265 "trsvcid": "4420" 00:17:35.265 }, 00:17:35.265 "peer_address": { 00:17:35.265 "trtype": "TCP", 00:17:35.265 "adrfam": "IPv4", 00:17:35.265 "traddr": "10.0.0.1", 00:17:35.265 "trsvcid": "56034" 00:17:35.265 }, 00:17:35.265 "auth": { 00:17:35.265 "state": "completed", 00:17:35.265 "digest": "sha512", 00:17:35.265 "dhgroup": "ffdhe2048" 00:17:35.265 } 00:17:35.265 } 00:17:35.265 ]' 00:17:35.265 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.265 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.265 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.265 10:36:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:35.265 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.265 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.265 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.265 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.525 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:17:35.525 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:17:36.457 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.457 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:36.457 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.457 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.457 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.457 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.457 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:36.457 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:36.715 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:36.715 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.715 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.715 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:36.715 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.715 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.715 10:36:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:17:36.715 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.715 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.715 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.715 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.715 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.715 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.281 00:17:37.281 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.281 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.281 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.281 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.281 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.281 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.281 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.281 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.281 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.281 { 00:17:37.281 "cntlid": 111, 00:17:37.281 "qid": 0, 00:17:37.281 "state": "enabled", 00:17:37.281 "thread": "nvmf_tgt_poll_group_000", 00:17:37.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:37.281 "listen_address": { 00:17:37.281 "trtype": "TCP", 00:17:37.281 "adrfam": "IPv4", 00:17:37.281 "traddr": "10.0.0.2", 00:17:37.281 "trsvcid": "4420" 00:17:37.281 }, 00:17:37.281 "peer_address": { 00:17:37.281 "trtype": "TCP", 00:17:37.281 "adrfam": "IPv4", 00:17:37.281 "traddr": "10.0.0.1", 00:17:37.281 "trsvcid": "56060" 00:17:37.281 }, 00:17:37.281 "auth": { 00:17:37.281 "state": "completed", 00:17:37.281 "digest": "sha512", 00:17:37.281 "dhgroup": "ffdhe2048" 00:17:37.281 } 00:17:37.281 } 00:17:37.281 ]' 00:17:37.281 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.538 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.538 
10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.538 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:37.538 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.538 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.538 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.538 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.796 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:17:37.796 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:17:38.729 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.729 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:38.729 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.729 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.729 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.729 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.729 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.729 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.729 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.986 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:38.986 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.986 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.986 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:38.986 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.986 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.986 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.986 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.986 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.986 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.986 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.986 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.986 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.242 00:17:39.499 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.499 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.499 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.758 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.758 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.758 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.758 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.758 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.758 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.758 { 00:17:39.758 "cntlid": 113, 00:17:39.758 "qid": 0, 00:17:39.758 "state": "enabled", 00:17:39.758 "thread": "nvmf_tgt_poll_group_000", 00:17:39.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:39.758 "listen_address": { 00:17:39.758 "trtype": "TCP", 00:17:39.758 "adrfam": "IPv4", 00:17:39.758 "traddr": "10.0.0.2", 00:17:39.758 "trsvcid": "4420" 00:17:39.758 }, 00:17:39.758 "peer_address": { 00:17:39.758 "trtype": "TCP", 00:17:39.758 "adrfam": "IPv4", 00:17:39.758 "traddr": "10.0.0.1", 00:17:39.758 "trsvcid": "48620" 00:17:39.758 }, 00:17:39.758 "auth": { 00:17:39.758 "state": "completed", 00:17:39.758 "digest": "sha512", 00:17:39.758 "dhgroup": "ffdhe3072" 00:17:39.758 } 00:17:39.758 } 00:17:39.758 ]' 00:17:39.758 10:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.758 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.758 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.758 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:39.758 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.758 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.758 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.758 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.016 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:40.016 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:40.950 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.950 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:40.950 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.950 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.950 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.950 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.950 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.950 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:41.208 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:41.208 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.208 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:41.208 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:41.208 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:41.208 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.208 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.208 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.208 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.208 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.208 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.208 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.208 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.784 00:17:41.784 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.784 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.784 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.045 { 00:17:42.045 "cntlid": 115, 00:17:42.045 "qid": 0, 00:17:42.045 "state": "enabled", 00:17:42.045 "thread": "nvmf_tgt_poll_group_000", 00:17:42.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:42.045 "listen_address": { 00:17:42.045 "trtype": "TCP", 00:17:42.045 "adrfam": "IPv4", 00:17:42.045 "traddr": "10.0.0.2", 00:17:42.045 "trsvcid": "4420" 00:17:42.045 }, 00:17:42.045 "peer_address": { 00:17:42.045 "trtype": "TCP", 00:17:42.045 "adrfam": "IPv4", 
00:17:42.045 "traddr": "10.0.0.1", 00:17:42.045 "trsvcid": "48646" 00:17:42.045 }, 00:17:42.045 "auth": { 00:17:42.045 "state": "completed", 00:17:42.045 "digest": "sha512", 00:17:42.045 "dhgroup": "ffdhe3072" 00:17:42.045 } 00:17:42.045 } 00:17:42.045 ]' 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.045 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.303 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:17:42.303 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:17:43.237 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.237 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:43.237 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.237 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.237 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.237 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.237 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.237 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.495 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:43.495 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.495 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.495 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:43.495 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:43.495 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.495 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.495 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.495 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.495 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.495 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.495 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.495 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.752 00:17:44.010 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.010 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.010 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.268 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.268 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.268 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.268 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.268 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.268 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.268 { 00:17:44.268 "cntlid": 117, 00:17:44.268 "qid": 0, 00:17:44.268 "state": "enabled", 00:17:44.268 "thread": "nvmf_tgt_poll_group_000", 00:17:44.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:44.268 "listen_address": { 00:17:44.268 "trtype": "TCP", 
00:17:44.268 "adrfam": "IPv4", 00:17:44.268 "traddr": "10.0.0.2", 00:17:44.268 "trsvcid": "4420" 00:17:44.268 }, 00:17:44.268 "peer_address": { 00:17:44.268 "trtype": "TCP", 00:17:44.268 "adrfam": "IPv4", 00:17:44.268 "traddr": "10.0.0.1", 00:17:44.268 "trsvcid": "48670" 00:17:44.268 }, 00:17:44.268 "auth": { 00:17:44.268 "state": "completed", 00:17:44.268 "digest": "sha512", 00:17:44.268 "dhgroup": "ffdhe3072" 00:17:44.268 } 00:17:44.268 } 00:17:44.268 ]' 00:17:44.268 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.268 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.268 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.268 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.269 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.269 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.269 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.269 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.526 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:17:44.526 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:17:45.459 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.459 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:45.459 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.459 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.459 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.459 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.459 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.459 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.718 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:45.718 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.718 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.718 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:45.718 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:45.718 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.718 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:17:45.718 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.718 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.718 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.718 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:45.718 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.718 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.284 00:17:46.284 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.284 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.284 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.284 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.284 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.284 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.284 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.541 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.541 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.541 { 00:17:46.541 "cntlid": 119, 00:17:46.541 "qid": 0, 00:17:46.541 "state": "enabled", 00:17:46.541 "thread": "nvmf_tgt_poll_group_000", 00:17:46.541 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:46.541 "listen_address": { 00:17:46.541 "trtype": "TCP", 00:17:46.541 "adrfam": "IPv4", 00:17:46.541 "traddr": "10.0.0.2", 00:17:46.541 "trsvcid": "4420" 00:17:46.541 }, 00:17:46.541 "peer_address": { 00:17:46.541 "trtype": "TCP", 00:17:46.541 "adrfam": "IPv4", 00:17:46.541 "traddr": "10.0.0.1", 00:17:46.541 "trsvcid": "48692" 00:17:46.541 }, 00:17:46.541 "auth": { 00:17:46.541 "state": "completed", 00:17:46.541 "digest": "sha512", 00:17:46.541 "dhgroup": "ffdhe3072" 00:17:46.541 } 00:17:46.541 } 00:17:46.541 ]' 00:17:46.541 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.541 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.541 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.541 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:46.541 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.541 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.541 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.541 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.799 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:17:46.799 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:17:47.733 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.733 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:47.733 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.733 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.733 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.733 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.733 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.733 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.733 10:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.991 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:47.991 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.991 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.991 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:47.991 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:47.991 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.992 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.992 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.992 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.992 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.992 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.992 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.992 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.250 00:17:48.250 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.250 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.250 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.507 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.507 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.507 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.507 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.765 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.765 10:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.765 { 00:17:48.765 "cntlid": 121, 00:17:48.765 "qid": 0, 00:17:48.765 "state": "enabled", 00:17:48.765 "thread": "nvmf_tgt_poll_group_000", 00:17:48.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:48.765 "listen_address": { 00:17:48.765 "trtype": "TCP", 00:17:48.765 "adrfam": "IPv4", 00:17:48.765 "traddr": "10.0.0.2", 00:17:48.765 "trsvcid": "4420" 00:17:48.765 }, 00:17:48.765 "peer_address": { 00:17:48.765 "trtype": "TCP", 00:17:48.765 "adrfam": "IPv4", 00:17:48.765 "traddr": "10.0.0.1", 00:17:48.765 "trsvcid": "48730" 00:17:48.765 }, 00:17:48.765 "auth": { 00:17:48.765 "state": "completed", 00:17:48.765 "digest": "sha512", 00:17:48.765 "dhgroup": "ffdhe4096" 00:17:48.765 } 00:17:48.766 } 00:17:48.766 ]' 00:17:48.766 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.766 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.766 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.766 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:48.766 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.766 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.766 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.766 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.024 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:49.024 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:49.958 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.958 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:49.958 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.958 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.958 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
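Between rounds the outer loops advance: the `for dhgroup in "${dhgroups[@]}"` and `for keyid in "${!keys[@]}"` markers above show the host being reconfigured from ffdhe3072 to ffdhe4096 via bdev_nvme_set_options before the key0..key3 rounds repeat, and the qpair dump for cntlid 121 confirms the new group was negotiated. A minimal sketch of that outer structure follows, under the assumption that only the groups visible in this part of the trace are looped over (variable names illustrative):

  # Hedged sketch of the dhgroup loop driving the rounds above (not captured output).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock

  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
      # Pin the host initiator to a single digest/dhgroup so each negotiation
      # can only succeed with exactly this combination.
      $RPC -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"

      # ... one connect_authenticate round per key0..key3, as sketched further up ...

      # While a round is connected, the target-side listing reports the group in use.
      $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'
  done
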
00:17:49.958 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.958 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.958 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:50.216 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:50.216 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.216 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.216 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:50.216 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.216 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.216 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.216 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.216 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.216 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.216 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.216 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.216 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.782 00:17:50.782 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.782 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.782 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.040 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.040 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.040 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.041 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.041 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.041 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.041 { 00:17:51.041 "cntlid": 123, 00:17:51.041 "qid": 0, 00:17:51.041 "state": "enabled", 00:17:51.041 "thread": "nvmf_tgt_poll_group_000", 00:17:51.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:51.041 "listen_address": { 00:17:51.041 "trtype": "TCP", 00:17:51.041 "adrfam": "IPv4", 00:17:51.041 "traddr": "10.0.0.2", 00:17:51.041 "trsvcid": "4420" 00:17:51.041 }, 00:17:51.041 "peer_address": { 00:17:51.041 "trtype": "TCP", 00:17:51.041 "adrfam": "IPv4", 00:17:51.041 "traddr": "10.0.0.1", 00:17:51.041 "trsvcid": "54842" 00:17:51.041 }, 00:17:51.041 "auth": { 00:17:51.041 "state": "completed", 00:17:51.041 "digest": "sha512", 00:17:51.041 "dhgroup": "ffdhe4096" 00:17:51.041 } 00:17:51.041 } 00:17:51.041 ]' 00:17:51.041 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.041 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.041 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.041 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:51.041 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.041 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.041 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.041 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.299 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:17:51.299 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:17:52.232 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.490 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:52.490 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.490 10:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.490 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.490 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.490 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:52.490 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:52.749 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:52.749 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.749 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.749 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:52.749 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.749 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.749 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.749 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.749 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.749 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.749 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.749 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.749 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.007 00:17:53.007 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.007 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.007 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.573 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.573 10:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.573 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.573 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.573 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.573 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.573 { 00:17:53.573 "cntlid": 125, 00:17:53.573 "qid": 0, 00:17:53.573 "state": "enabled", 00:17:53.573 "thread": "nvmf_tgt_poll_group_000", 00:17:53.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:53.573 "listen_address": { 00:17:53.573 "trtype": "TCP", 00:17:53.573 "adrfam": "IPv4", 00:17:53.573 "traddr": "10.0.0.2", 00:17:53.573 "trsvcid": "4420" 00:17:53.573 }, 00:17:53.573 "peer_address": { 00:17:53.573 "trtype": "TCP", 00:17:53.573 "adrfam": "IPv4", 00:17:53.573 "traddr": "10.0.0.1", 00:17:53.573 "trsvcid": "54878" 00:17:53.573 }, 00:17:53.573 "auth": { 00:17:53.573 "state": "completed", 00:17:53.573 "digest": "sha512", 00:17:53.573 "dhgroup": "ffdhe4096" 00:17:53.573 } 00:17:53.573 } 00:17:53.573 ]' 00:17:53.573 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.573 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.573 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.573 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.573 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.573 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.573 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.573 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.830 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:17:53.830 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:17:54.763 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.763 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:54.763 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.763 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.763 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.763 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.763 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:54.763 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:55.021 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:55.021 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.021 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.021 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:55.021 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.021 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.021 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:17:55.021 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.021 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.021 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.021 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.021 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.021 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.586 00:17:55.586 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.586 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.586 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.844 10:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.844 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.844 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.844 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.844 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.844 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.844 { 00:17:55.844 "cntlid": 127, 00:17:55.844 "qid": 0, 00:17:55.844 "state": "enabled", 00:17:55.844 "thread": "nvmf_tgt_poll_group_000", 00:17:55.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:55.844 "listen_address": { 00:17:55.844 "trtype": "TCP", 00:17:55.844 "adrfam": "IPv4", 00:17:55.844 "traddr": "10.0.0.2", 00:17:55.844 "trsvcid": "4420" 00:17:55.844 }, 00:17:55.844 "peer_address": { 00:17:55.844 "trtype": "TCP", 00:17:55.844 "adrfam": "IPv4", 00:17:55.844 "traddr": "10.0.0.1", 00:17:55.844 "trsvcid": "54898" 00:17:55.844 }, 00:17:55.844 "auth": { 00:17:55.844 "state": "completed", 00:17:55.844 "digest": "sha512", 00:17:55.844 "dhgroup": "ffdhe4096" 00:17:55.844 } 00:17:55.844 } 00:17:55.844 ]' 00:17:55.844 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.844 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.844 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.844 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.844 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.844 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.844 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.844 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.102 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:17:56.102 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:17:57.035 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.035 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:57.035 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.035 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.035 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.035 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.035 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.035 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:57.035 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:57.293 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:57.293 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.293 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.293 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:57.293 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.293 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.293 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.293 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.293 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.293 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.293 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.293 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.293 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.860 00:17:57.860 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.860 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.860 
10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.118 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.118 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.118 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.118 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.118 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.118 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.118 { 00:17:58.118 "cntlid": 129, 00:17:58.118 "qid": 0, 00:17:58.118 "state": "enabled", 00:17:58.118 "thread": "nvmf_tgt_poll_group_000", 00:17:58.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:17:58.118 "listen_address": { 00:17:58.118 "trtype": "TCP", 00:17:58.118 "adrfam": "IPv4", 00:17:58.118 "traddr": "10.0.0.2", 00:17:58.118 "trsvcid": "4420" 00:17:58.118 }, 00:17:58.118 "peer_address": { 00:17:58.118 "trtype": "TCP", 00:17:58.118 "adrfam": "IPv4", 00:17:58.118 "traddr": "10.0.0.1", 00:17:58.118 "trsvcid": "54926" 00:17:58.118 }, 00:17:58.118 "auth": { 00:17:58.118 "state": "completed", 00:17:58.118 "digest": "sha512", 00:17:58.118 "dhgroup": "ffdhe6144" 00:17:58.118 } 00:17:58.118 } 00:17:58.118 ]' 00:17:58.118 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.118 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.118 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.376 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:58.376 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.376 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.376 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.376 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.634 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:58.634 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret 
DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:17:59.588 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.588 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:59.588 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.588 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.588 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.588 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.588 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.588 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.846 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:59.846 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.846 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.846 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:59.846 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:59.846 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.846 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.846 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.846 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.846 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.846 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.846 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.846 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.411 00:18:00.411 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.411 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.411 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.670 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.670 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.670 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.670 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.670 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.670 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.670 { 00:18:00.670 "cntlid": 131, 00:18:00.670 "qid": 0, 00:18:00.670 "state": "enabled", 00:18:00.670 "thread": "nvmf_tgt_poll_group_000", 00:18:00.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:00.670 "listen_address": { 00:18:00.670 "trtype": "TCP", 00:18:00.670 "adrfam": "IPv4", 00:18:00.670 "traddr": "10.0.0.2", 00:18:00.670 "trsvcid": "4420" 00:18:00.670 }, 00:18:00.670 "peer_address": { 00:18:00.670 "trtype": "TCP", 00:18:00.670 "adrfam": "IPv4", 00:18:00.670 "traddr": "10.0.0.1", 00:18:00.670 "trsvcid": "39274" 00:18:00.670 }, 00:18:00.670 "auth": { 00:18:00.670 "state": "completed", 00:18:00.670 "digest": "sha512", 00:18:00.670 "dhgroup": "ffdhe6144" 00:18:00.670 } 00:18:00.670 } 00:18:00.670 ]' 00:18:00.670 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.670 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.670 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.670 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.670 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.928 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.928 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.928 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.186 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:18:01.187 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:18:02.120 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.120 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:02.120 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.120 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.120 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.120 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.120 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.120 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.378 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:02.378 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.378 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.378 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:02.378 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:02.378 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.378 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.378 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.378 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.378 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.378 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.378 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.378 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.944 00:18:02.944 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.944 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.944 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.202 { 00:18:03.202 "cntlid": 133, 00:18:03.202 "qid": 0, 00:18:03.202 "state": "enabled", 00:18:03.202 "thread": "nvmf_tgt_poll_group_000", 00:18:03.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:03.202 "listen_address": { 00:18:03.202 "trtype": "TCP", 00:18:03.202 "adrfam": "IPv4", 00:18:03.202 "traddr": "10.0.0.2", 00:18:03.202 "trsvcid": "4420" 00:18:03.202 }, 00:18:03.202 "peer_address": { 00:18:03.202 "trtype": "TCP", 00:18:03.202 "adrfam": "IPv4", 00:18:03.202 "traddr": "10.0.0.1", 00:18:03.202 "trsvcid": "39308" 00:18:03.202 }, 00:18:03.202 "auth": { 00:18:03.202 "state": "completed", 00:18:03.202 "digest": "sha512", 00:18:03.202 "dhgroup": "ffdhe6144" 00:18:03.202 } 00:18:03.202 } 00:18:03.202 ]' 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.202 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.461 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret 
DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:18:03.461 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:18:04.394 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.394 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:04.394 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.394 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.394 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.394 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.394 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.394 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.652 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:04.652 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.652 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.652 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:04.652 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.652 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.652 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:18:04.652 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.652 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.652 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.652 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:04.652 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:04.652 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.217 00:18:05.217 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.217 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.217 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.475 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.475 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.475 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.475 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.475 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.475 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.475 { 00:18:05.475 "cntlid": 135, 00:18:05.475 "qid": 0, 00:18:05.475 "state": "enabled", 00:18:05.475 "thread": "nvmf_tgt_poll_group_000", 00:18:05.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:05.475 "listen_address": { 00:18:05.475 "trtype": "TCP", 00:18:05.475 "adrfam": "IPv4", 00:18:05.475 "traddr": "10.0.0.2", 00:18:05.475 "trsvcid": "4420" 00:18:05.475 }, 00:18:05.475 "peer_address": { 00:18:05.475 "trtype": "TCP", 00:18:05.475 "adrfam": "IPv4", 00:18:05.475 "traddr": "10.0.0.1", 00:18:05.475 "trsvcid": "39330" 00:18:05.475 }, 00:18:05.475 "auth": { 00:18:05.475 "state": "completed", 00:18:05.475 "digest": "sha512", 00:18:05.475 "dhgroup": "ffdhe6144" 00:18:05.475 } 00:18:05.475 } 00:18:05.475 ]' 00:18:05.475 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.476 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.476 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.476 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.476 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.476 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.476 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.476 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.041 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:18:06.041 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.974 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.908 00:18:07.908 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.908 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.908 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.166 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.166 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.166 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.166 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.166 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.166 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.166 { 00:18:08.166 "cntlid": 137, 00:18:08.166 "qid": 0, 00:18:08.166 "state": "enabled", 00:18:08.166 "thread": "nvmf_tgt_poll_group_000", 00:18:08.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:08.166 "listen_address": { 00:18:08.166 "trtype": "TCP", 00:18:08.166 "adrfam": "IPv4", 00:18:08.166 "traddr": "10.0.0.2", 00:18:08.166 "trsvcid": "4420" 00:18:08.166 }, 00:18:08.166 "peer_address": { 00:18:08.166 "trtype": "TCP", 00:18:08.166 "adrfam": "IPv4", 00:18:08.166 "traddr": "10.0.0.1", 00:18:08.166 "trsvcid": "39348" 00:18:08.166 }, 00:18:08.166 "auth": { 00:18:08.166 "state": "completed", 00:18:08.166 "digest": "sha512", 00:18:08.166 "dhgroup": "ffdhe8192" 00:18:08.166 } 00:18:08.166 } 00:18:08.166 ]' 00:18:08.166 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.166 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.166 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.166 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.166 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.424 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.424 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.424 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.682 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:18:08.682 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:18:09.751 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.751 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:09.751 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.751 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.751 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.751 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.751 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.751 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.751 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:09.751 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.751 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.751 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:09.751 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:09.751 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.751 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.751 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.751 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.056 10:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.056 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.056 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.056 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.708 00:18:10.708 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.708 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.708 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.020 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.020 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.020 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.020 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.020 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.020 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.020 { 00:18:11.020 "cntlid": 139, 00:18:11.020 "qid": 0, 00:18:11.020 "state": "enabled", 00:18:11.020 "thread": "nvmf_tgt_poll_group_000", 00:18:11.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:11.020 "listen_address": { 00:18:11.020 "trtype": "TCP", 00:18:11.020 "adrfam": "IPv4", 00:18:11.020 "traddr": "10.0.0.2", 00:18:11.020 "trsvcid": "4420" 00:18:11.020 }, 00:18:11.020 "peer_address": { 00:18:11.020 "trtype": "TCP", 00:18:11.020 "adrfam": "IPv4", 00:18:11.020 "traddr": "10.0.0.1", 00:18:11.020 "trsvcid": "57690" 00:18:11.020 }, 00:18:11.020 "auth": { 00:18:11.020 "state": "completed", 00:18:11.020 "digest": "sha512", 00:18:11.020 "dhgroup": "ffdhe8192" 00:18:11.020 } 00:18:11.020 } 00:18:11.020 ]' 00:18:11.020 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.020 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.020 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.020 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.020 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.278 10:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.278 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.278 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.536 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:18:11.536 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: --dhchap-ctrl-secret DHHC-1:02:ZWY4ZWYwOTEwYTcyMGM1MzRlODM0N2ViYjYyOTk3NWFhOTFmMTFiNDFhMTNkNjZmKWKHwA==: 00:18:12.467 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.467 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:12.467 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.467 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.467 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.467 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.467 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.467 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.724 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:12.724 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.724 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.724 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:12.724 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:12.724 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.724 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.724 10:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.724 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.724 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.724 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.724 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.725 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.289 00:18:13.547 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.547 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.547 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.805 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.805 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.805 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.805 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.805 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.805 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.805 { 00:18:13.805 "cntlid": 141, 00:18:13.805 "qid": 0, 00:18:13.805 "state": "enabled", 00:18:13.805 "thread": "nvmf_tgt_poll_group_000", 00:18:13.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:13.805 "listen_address": { 00:18:13.805 "trtype": "TCP", 00:18:13.805 "adrfam": "IPv4", 00:18:13.805 "traddr": "10.0.0.2", 00:18:13.805 "trsvcid": "4420" 00:18:13.805 }, 00:18:13.805 "peer_address": { 00:18:13.805 "trtype": "TCP", 00:18:13.805 "adrfam": "IPv4", 00:18:13.805 "traddr": "10.0.0.1", 00:18:13.805 "trsvcid": "57714" 00:18:13.805 }, 00:18:13.805 "auth": { 00:18:13.805 "state": "completed", 00:18:13.805 "digest": "sha512", 00:18:13.805 "dhgroup": "ffdhe8192" 00:18:13.805 } 00:18:13.805 } 00:18:13.805 ]' 00:18:13.805 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.805 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.805 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.805 10:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.805 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.805 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.805 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.806 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.064 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:18:14.064 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:01:ZTI1MTM1ODliYWQ1ZjFmOGFlMGE4ZTFiYzFjMTU1YzguHoKq: 00:18:14.997 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.997 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:14.997 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.997 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.997 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.997 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.997 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.997 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.256 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:15.256 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.256 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.256 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:15.256 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.256 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.256 10:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:18:15.256 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.256 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.256 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.256 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.256 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.256 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.190 00:18:16.190 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.190 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.190 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.446 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.446 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.446 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.446 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.446 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.446 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.446 { 00:18:16.446 "cntlid": 143, 00:18:16.446 "qid": 0, 00:18:16.446 "state": "enabled", 00:18:16.446 "thread": "nvmf_tgt_poll_group_000", 00:18:16.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:16.446 "listen_address": { 00:18:16.446 "trtype": "TCP", 00:18:16.446 "adrfam": "IPv4", 00:18:16.446 "traddr": "10.0.0.2", 00:18:16.446 "trsvcid": "4420" 00:18:16.446 }, 00:18:16.446 "peer_address": { 00:18:16.446 "trtype": "TCP", 00:18:16.446 "adrfam": "IPv4", 00:18:16.446 "traddr": "10.0.0.1", 00:18:16.446 "trsvcid": "57744" 00:18:16.446 }, 00:18:16.446 "auth": { 00:18:16.446 "state": "completed", 00:18:16.446 "digest": "sha512", 00:18:16.446 "dhgroup": "ffdhe8192" 00:18:16.446 } 00:18:16.446 } 00:18:16.446 ]' 00:18:16.446 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.446 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.446 
10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.447 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.447 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.704 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.704 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.704 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.961 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:18:16.961 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:18:17.891 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.891 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:17.891 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.891 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.891 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.891 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:17.891 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:17.891 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:17.891 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:17.891 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:17.891 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:18.149 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:18.149 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.149 10:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.149 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:18.149 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.149 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.149 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.149 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.149 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.149 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.149 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.149 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.149 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.083 00:18:19.083 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.083 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.083 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.340 { 00:18:19.340 "cntlid": 145, 00:18:19.340 "qid": 0, 00:18:19.340 "state": "enabled", 00:18:19.340 "thread": "nvmf_tgt_poll_group_000", 00:18:19.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:19.340 "listen_address": { 00:18:19.340 "trtype": "TCP", 00:18:19.340 "adrfam": "IPv4", 00:18:19.340 "traddr": "10.0.0.2", 00:18:19.340 "trsvcid": "4420" 00:18:19.340 }, 00:18:19.340 "peer_address": { 00:18:19.340 
"trtype": "TCP", 00:18:19.340 "adrfam": "IPv4", 00:18:19.340 "traddr": "10.0.0.1", 00:18:19.340 "trsvcid": "47356" 00:18:19.340 }, 00:18:19.340 "auth": { 00:18:19.340 "state": "completed", 00:18:19.340 "digest": "sha512", 00:18:19.340 "dhgroup": "ffdhe8192" 00:18:19.340 } 00:18:19.340 } 00:18:19.340 ]' 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.340 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.599 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:18:19.599 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:NzIyZWE5ZTU1Yzg4N2FlNTQzM2E0OTQwNjFhMTdlODRkNGM3NTNlM2FlZjIwMGJkgrp0GQ==: --dhchap-ctrl-secret DHHC-1:03:MzU5NGFiOWUyYzAyZGQ1NWFhMDA0ZDA4M2VmZTQ2Y2Y1YTBhNjRhZDlmYjI5YzZiZDkxNTBhZWQ2OWIwMzUyYkSjwPw=: 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:20.533 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:21.466 request: 00:18:21.466 { 00:18:21.466 "name": "nvme0", 00:18:21.466 "trtype": "tcp", 00:18:21.466 "traddr": "10.0.0.2", 00:18:21.466 "adrfam": "ipv4", 00:18:21.466 "trsvcid": "4420", 00:18:21.466 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:21.466 "prchk_reftag": false, 00:18:21.466 "prchk_guard": false, 00:18:21.466 "hdgst": false, 00:18:21.466 "ddgst": false, 00:18:21.466 "dhchap_key": "key2", 00:18:21.466 "allow_unrecognized_csi": false, 00:18:21.466 "method": "bdev_nvme_attach_controller", 00:18:21.466 "req_id": 1 00:18:21.466 } 00:18:21.466 Got JSON-RPC error response 00:18:21.466 response: 00:18:21.466 { 00:18:21.466 "code": -5, 00:18:21.466 "message": "Input/output error" 00:18:21.466 } 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.466 10:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.466 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.033 request: 00:18:22.033 { 00:18:22.033 "name": "nvme0", 00:18:22.033 "trtype": "tcp", 00:18:22.033 "traddr": "10.0.0.2", 00:18:22.033 "adrfam": "ipv4", 00:18:22.033 "trsvcid": "4420", 00:18:22.033 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:22.033 "prchk_reftag": false, 00:18:22.033 "prchk_guard": false, 00:18:22.033 "hdgst": false, 00:18:22.033 "ddgst": false, 00:18:22.033 "dhchap_key": "key1", 00:18:22.033 "dhchap_ctrlr_key": "ckey2", 00:18:22.033 "allow_unrecognized_csi": false, 00:18:22.033 "method": "bdev_nvme_attach_controller", 00:18:22.033 "req_id": 1 00:18:22.033 } 00:18:22.033 Got JSON-RPC error response 00:18:22.033 response: 00:18:22.033 { 00:18:22.033 "code": -5, 00:18:22.033 "message": "Input/output error" 00:18:22.033 } 00:18:22.033 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:22.033 10:37:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:22.033 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:22.033 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.034 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:22.291 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.291 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.291 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.291 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.857 request: 00:18:22.857 { 00:18:22.857 "name": "nvme0", 00:18:22.857 "trtype": "tcp", 00:18:22.857 "traddr": "10.0.0.2", 00:18:22.857 "adrfam": "ipv4", 00:18:22.857 "trsvcid": "4420", 00:18:22.857 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:22.857 "prchk_reftag": false, 00:18:22.857 "prchk_guard": false, 00:18:22.857 "hdgst": false, 00:18:22.857 "ddgst": false, 00:18:22.857 "dhchap_key": "key1", 00:18:22.857 "dhchap_ctrlr_key": "ckey1", 00:18:22.857 "allow_unrecognized_csi": false, 00:18:22.857 "method": "bdev_nvme_attach_controller", 00:18:22.857 "req_id": 1 00:18:22.857 } 00:18:22.857 Got JSON-RPC error response 00:18:22.857 response: 00:18:22.857 { 00:18:22.857 "code": -5, 00:18:22.857 "message": "Input/output error" 00:18:22.857 } 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 362970 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 362970 ']' 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 362970 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 362970 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 362970' 00:18:23.116 killing process with pid 362970 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 362970 00:18:23.116 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 362970 00:18:23.375 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:23.375 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:23.375 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:23.375 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:23.375 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=386111 00:18:23.375 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:23.375 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 386111 00:18:23.375 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 386111 ']' 00:18:23.375 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.375 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:23.375 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.375 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:23.375 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 386111 00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 386111 ']' 00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
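(Illustrative aside, not part of the captured trace.) The target side of this auth test is an nvmf_tgt instance started inside the test's network namespace with the nvmf_auth debug log component enabled, and rpc_cmd talks to it once it is listening on /var/tmp/spdk.sock. A minimal sketch of that startup, reusing the exact binary path and flags traced above; waitforlisten is the harness helper referenced in the trace:

# Sketch only -- condensed from the trace above, not captured output.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# The harness' waitforlisten blocks until the app accepts RPCs on /var/tmp/spdk.sock.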
00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:23.633 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.891 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:23.891 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:23.891 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:23.891 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.891 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.891 null0 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.O51 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.z1q ]] 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.z1q 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.osT 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.3Ky ]] 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Ky 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.892 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.149 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.149 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.149 10:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.D7y 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.2Z2 ]] 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2Z2 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.aWh 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
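(Illustrative aside, not part of the captured trace.) The connect_authenticate step traced above reduces to three RPCs: register the DH-HMAC-CHAP secret in the target's keyring, allow the host NQN on the subsystem with that key, and attach from the host application (RPC socket /var/tmp/host.sock) with the matching --dhchap-key. A condensed sketch using the same paths, NQNs and key files as this run; the first two calls go to the target's default RPC socket:

# Sketch only -- condensed from the trace above, not captured output.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd

$RPC keyring_file_add_key key3 /tmp/spdk.key-sha512.aWh              # target: load the secret into the keyring
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
     -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
     -b nvme0 --dhchap-key key3                                      # host: authenticate and attach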
00:18:24.150 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.521 nvme0n1 00:18:25.521 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.521 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.521 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.779 { 00:18:25.779 "cntlid": 1, 00:18:25.779 "qid": 0, 00:18:25.779 "state": "enabled", 00:18:25.779 "thread": "nvmf_tgt_poll_group_000", 00:18:25.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:25.779 "listen_address": { 00:18:25.779 "trtype": "TCP", 00:18:25.779 "adrfam": "IPv4", 00:18:25.779 "traddr": "10.0.0.2", 00:18:25.779 "trsvcid": "4420" 00:18:25.779 }, 00:18:25.779 "peer_address": { 00:18:25.779 "trtype": "TCP", 00:18:25.779 "adrfam": "IPv4", 00:18:25.779 "traddr": "10.0.0.1", 00:18:25.779 "trsvcid": "47402" 00:18:25.779 }, 00:18:25.779 "auth": { 00:18:25.779 "state": "completed", 00:18:25.779 "digest": "sha512", 00:18:25.779 "dhgroup": "ffdhe8192" 00:18:25.779 } 00:18:25.779 } 00:18:25.779 ]' 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.779 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.345 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:18:26.345 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:18:26.910 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.910 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:26.910 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.910 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.169 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.169 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:18:27.169 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.169 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.169 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.169 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:27.169 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:27.427 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:27.427 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:27.427 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:27.427 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:27.427 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.427 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:27.427 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.427 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.427 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.427 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.685 request: 00:18:27.685 { 00:18:27.685 "name": "nvme0", 00:18:27.685 "trtype": "tcp", 00:18:27.685 "traddr": "10.0.0.2", 00:18:27.685 "adrfam": "ipv4", 00:18:27.685 "trsvcid": "4420", 00:18:27.685 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:27.685 "prchk_reftag": false, 00:18:27.685 "prchk_guard": false, 00:18:27.685 "hdgst": false, 00:18:27.685 "ddgst": false, 00:18:27.685 "dhchap_key": "key3", 00:18:27.685 "allow_unrecognized_csi": false, 00:18:27.685 "method": "bdev_nvme_attach_controller", 00:18:27.685 "req_id": 1 00:18:27.685 } 00:18:27.685 Got JSON-RPC error response 00:18:27.685 response: 00:18:27.685 { 00:18:27.685 "code": -5, 00:18:27.685 "message": "Input/output error" 00:18:27.685 } 00:18:27.685 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:27.685 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:27.685 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:27.685 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:27.685 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:27.685 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:27.685 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:27.685 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:27.942 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:27.942 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:27.942 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:27.942 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:27.942 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.942 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:27.942 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.942 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.942 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.942 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.200 request: 00:18:28.200 { 00:18:28.200 "name": "nvme0", 00:18:28.200 "trtype": "tcp", 00:18:28.200 "traddr": "10.0.0.2", 00:18:28.200 "adrfam": "ipv4", 00:18:28.200 "trsvcid": "4420", 00:18:28.200 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:28.200 "prchk_reftag": false, 00:18:28.200 "prchk_guard": false, 00:18:28.200 "hdgst": false, 00:18:28.200 "ddgst": false, 00:18:28.200 "dhchap_key": "key3", 00:18:28.200 "allow_unrecognized_csi": false, 00:18:28.200 "method": "bdev_nvme_attach_controller", 00:18:28.200 "req_id": 1 00:18:28.200 } 00:18:28.200 Got JSON-RPC error response 00:18:28.200 response: 00:18:28.200 { 00:18:28.200 "code": -5, 00:18:28.200 "message": "Input/output error" 00:18:28.200 } 00:18:28.200 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:28.200 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:28.200 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:28.200 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:28.200 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:28.200 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:28.200 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:28.200 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.200 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.200 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:28.458 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.459 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:28.459 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:28.459 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:29.024 request: 00:18:29.024 { 00:18:29.024 "name": "nvme0", 00:18:29.024 "trtype": "tcp", 00:18:29.024 "traddr": "10.0.0.2", 00:18:29.024 "adrfam": "ipv4", 00:18:29.024 "trsvcid": "4420", 00:18:29.024 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:29.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:29.024 "prchk_reftag": false, 00:18:29.024 "prchk_guard": false, 00:18:29.024 "hdgst": false, 00:18:29.024 "ddgst": false, 00:18:29.024 "dhchap_key": "key0", 00:18:29.024 "dhchap_ctrlr_key": "key1", 00:18:29.024 "allow_unrecognized_csi": false, 00:18:29.024 "method": "bdev_nvme_attach_controller", 00:18:29.024 "req_id": 1 00:18:29.024 } 00:18:29.024 Got JSON-RPC error response 00:18:29.024 response: 00:18:29.024 { 00:18:29.024 "code": -5, 00:18:29.024 "message": "Input/output error" 00:18:29.024 } 00:18:29.024 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:29.024 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:29.024 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:29.024 10:37:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:29.024 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:29.024 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:29.024 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:29.282 nvme0n1 00:18:29.282 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:29.282 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:29.282 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.540 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.540 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.540 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.798 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 00:18:29.798 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.798 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.798 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.798 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:29.798 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:29.798 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:31.170 nvme0n1 00:18:31.170 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:31.170 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:31.170 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.429 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.429 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.429 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.429 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.687 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.687 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:31.687 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:31.687 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.944 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.944 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:18:31.944 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: --dhchap-ctrl-secret DHHC-1:03:MzBiMDU1YWUxODUwOWFhY2EwNDI1MjExOWFiNzY2NTQyZDJlMGYzNzI2ZGE0MTgzZmU3MjNlMzllNDM4OWRlM4eymdA=: 00:18:32.879 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:32.879 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:32.879 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:32.879 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:32.879 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:32.879 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:32.879 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:32.879 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.879 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.136 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:33.136 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:33.136 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:33.136 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:33.136 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.136 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:33.136 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.136 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:33.136 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:33.136 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:34.069 request: 00:18:34.069 { 00:18:34.069 "name": "nvme0", 00:18:34.069 "trtype": "tcp", 00:18:34.069 "traddr": "10.0.0.2", 00:18:34.069 "adrfam": "ipv4", 00:18:34.069 "trsvcid": "4420", 00:18:34.069 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:34.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:34.069 "prchk_reftag": false, 00:18:34.069 "prchk_guard": false, 00:18:34.069 "hdgst": false, 00:18:34.069 "ddgst": false, 00:18:34.069 "dhchap_key": "key1", 00:18:34.069 "allow_unrecognized_csi": false, 00:18:34.069 "method": "bdev_nvme_attach_controller", 00:18:34.069 "req_id": 1 00:18:34.069 } 00:18:34.069 Got JSON-RPC error response 00:18:34.069 response: 00:18:34.069 { 00:18:34.069 "code": -5, 00:18:34.069 "message": "Input/output error" 00:18:34.069 } 00:18:34.069 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:34.069 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.069 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.069 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.069 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:34.069 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:34.070 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.444 nvme0n1 00:18:35.444 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:35.444 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:35.444 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.444 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.444 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.444 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.701 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:35.701 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.701 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.701 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.701 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:35.701 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:35.701 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:36.268 nvme0n1 00:18:36.268 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:36.268 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:36.268 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.526 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.526 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.526 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: '' 2s 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: ]] 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YWY1MDI3MDIyY2I2OTZkODA2YmU0Yzk2N2Q2ZmE5ZGbrQth/: 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:36.784 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: 2s 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: ]] 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NGE5M2M4NjliNWQ0OThlNWEzOTY1OTQwMDk1OTllZmRhNjc4ZDdhMzBjMjc0YmI51dL9kA==: 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:38.686 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:41.216 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:42.149 nvme0n1 00:18:42.149 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:42.149 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.149 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.149 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.149 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:42.149 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:43.084 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:43.084 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:43.084 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.342 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.342 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:43.342 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.342 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.342 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.342 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:43.342 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:43.601 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:43.601 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.601 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:43.859 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:44.791 request: 00:18:44.791 { 00:18:44.791 "name": "nvme0", 00:18:44.791 "dhchap_key": "key1", 00:18:44.791 "dhchap_ctrlr_key": "key3", 00:18:44.791 "method": "bdev_nvme_set_keys", 00:18:44.791 "req_id": 1 00:18:44.791 } 00:18:44.791 Got JSON-RPC error response 00:18:44.791 response: 00:18:44.791 { 00:18:44.791 "code": -13, 00:18:44.791 "message": "Permission denied" 00:18:44.791 } 00:18:44.791 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:44.791 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:44.791 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:44.791 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:44.791 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:44.791 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:44.791 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.048 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:45.048 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:45.982 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:45.982 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:45.982 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.240 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:46.240 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:46.240 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.240 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.240 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.240 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:46.240 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:46.240 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:47.616 nvme0n1 00:18:47.616 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.616 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.616 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.616 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.616 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:47.616 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:47.616 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:47.616 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
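A minimal sketch of the DH-HMAC-CHAP re-keying pattern exercised in the trace above, assuming the target app uses the default /var/tmp/spdk.sock, the host-side bdev_nvme stack uses /var/tmp/host.sock, and key0..key3 were registered in the keyring earlier in auth.sh (not shown here); it is an illustration, not the test script itself.

    # Sketch only: paths, NQNs and key names copied from this run.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Target side: update the DH-HMAC-CHAP key pair the subsystem accepts for this host.
    $rpc nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # Host side: re-key the attached controller so it re-authenticates with the new pair.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # Confirm the controller survived the re-key. A pair the target no longer accepts
    # (e.g. key1/key3 after the switch to key2/key3) is expected to fail with
    # -13 Permission denied, which is what the NOT wrapper in the trace checks.
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'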
00:18:47.616 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.616 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:47.616 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.616 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:47.616 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:48.549 request: 00:18:48.549 { 00:18:48.549 "name": "nvme0", 00:18:48.549 "dhchap_key": "key2", 00:18:48.549 "dhchap_ctrlr_key": "key0", 00:18:48.549 "method": "bdev_nvme_set_keys", 00:18:48.549 "req_id": 1 00:18:48.549 } 00:18:48.549 Got JSON-RPC error response 00:18:48.549 response: 00:18:48.549 { 00:18:48.549 "code": -13, 00:18:48.549 "message": "Permission denied" 00:18:48.549 } 00:18:48.549 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:48.549 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.549 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.549 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.549 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:48.549 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.549 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:48.808 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:48.808 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:49.742 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:49.742 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:49.742 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.000 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:50.000 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:50.000 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:50.000 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 362990 00:18:50.000 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 362990 ']' 00:18:50.000 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 362990 00:18:50.000 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:50.000 10:37:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:50.000 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 362990 00:18:50.000 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:50.000 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:50.000 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 362990' 00:18:50.000 killing process with pid 362990 00:18:50.000 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 362990 00:18:50.000 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 362990 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:50.566 rmmod nvme_tcp 00:18:50.566 rmmod nvme_fabrics 00:18:50.566 rmmod nvme_keyring 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 386111 ']' 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 386111 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 386111 ']' 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 386111 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 386111 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 386111' 00:18:50.566 killing process with pid 386111 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 386111 00:18:50.566 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@976 -- # wait 386111 00:18:50.827 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:50.827 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:50.827 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:50.827 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:50.827 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:50.827 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:50.827 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:50.827 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:50.827 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:50.827 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.827 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.827 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.737 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:52.737 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.O51 /tmp/spdk.key-sha256.osT /tmp/spdk.key-sha384.D7y /tmp/spdk.key-sha512.aWh /tmp/spdk.key-sha512.z1q /tmp/spdk.key-sha384.3Ky /tmp/spdk.key-sha256.2Z2 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:52.737 00:18:52.737 real 3m33.963s 00:18:52.737 user 8m19.987s 00:18:52.737 sys 0m28.664s 00:18:52.737 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:52.737 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.737 ************************************ 00:18:52.737 END TEST nvmf_auth_target 00:18:52.737 ************************************ 00:18:52.737 10:37:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:52.737 10:37:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:52.737 10:37:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:52.737 10:37:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:52.737 10:37:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:52.996 ************************************ 00:18:52.996 START TEST nvmf_bdevio_no_huge 00:18:52.996 ************************************ 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:52.996 * Looking for test storage... 
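A condensed sketch of the teardown that cleanup/nvmftestfini performed just above (interface, namespace and key paths copied from this run; the real helpers in target/auth.sh and test/nvmf/common.sh do more bookkeeping than shown, and the namespace delete is an assumption mirroring the earlier ip netns add).

    kill "$hostpid" && wait "$hostpid"                    # $hostpid: placeholder for the host.sock app (362990 in this run)
    kill "$nvmfpid" && wait "$nvmfpid"                    # $nvmfpid: placeholder for the nvmf target (386111 in this run)
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring     # unload the initiator-side kernel modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK-tagged firewall rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed counterpart of the earlier netns add
    ip -4 addr flush cvl_0_1                              # clear the initiator-side test address
    rm -f /tmp/spdk.key-*                                 # discard the generated DH-CHAP key files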
00:18:52.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:52.996 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:52.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.997 --rc genhtml_branch_coverage=1 00:18:52.997 --rc genhtml_function_coverage=1 00:18:52.997 --rc genhtml_legend=1 00:18:52.997 --rc geninfo_all_blocks=1 00:18:52.997 --rc geninfo_unexecuted_blocks=1 00:18:52.997 00:18:52.997 ' 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:52.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.997 --rc genhtml_branch_coverage=1 00:18:52.997 --rc genhtml_function_coverage=1 00:18:52.997 --rc genhtml_legend=1 00:18:52.997 --rc geninfo_all_blocks=1 00:18:52.997 --rc geninfo_unexecuted_blocks=1 00:18:52.997 00:18:52.997 ' 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:52.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.997 --rc genhtml_branch_coverage=1 00:18:52.997 --rc genhtml_function_coverage=1 00:18:52.997 --rc genhtml_legend=1 00:18:52.997 --rc geninfo_all_blocks=1 00:18:52.997 --rc geninfo_unexecuted_blocks=1 00:18:52.997 00:18:52.997 ' 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:52.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.997 --rc genhtml_branch_coverage=1 00:18:52.997 --rc genhtml_function_coverage=1 00:18:52.997 --rc genhtml_legend=1 00:18:52.997 --rc geninfo_all_blocks=1 00:18:52.997 --rc geninfo_unexecuted_blocks=1 00:18:52.997 00:18:52.997 ' 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:52.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:52.997 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:55.530 
10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.530 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:18:55.531 Found 0000:82:00.0 (0x8086 - 0x159b) 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:18:55.531 Found 0000:82:00.1 (0x8086 - 0x159b) 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:18:55.531 Found net devices under 0000:82:00.0: cvl_0_0 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:18:55.531 Found net devices under 0000:82:00.1: cvl_0_1 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:55.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:18:55.531 00:18:55.531 --- 10.0.0.2 ping statistics --- 00:18:55.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.531 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:55.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:18:55.531 00:18:55.531 --- 10.0.0.1 ping statistics --- 00:18:55.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.531 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:55.531 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=391247 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 391247 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 391247 ']' 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.532 [2024-11-15 10:37:43.713238] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:18:55.532 [2024-11-15 10:37:43.713325] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:55.532 [2024-11-15 10:37:43.788420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:55.532 [2024-11-15 10:37:43.844727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.532 [2024-11-15 10:37:43.844789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.532 [2024-11-15 10:37:43.844810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.532 [2024-11-15 10:37:43.844828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.532 [2024-11-15 10:37:43.844842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
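A rough sketch of the nvmfappstart step traced above: nvmf_tgt is launched inside the target namespace without hugepages (1024 MiB of regular memory via -s 1024, cores 3-6 via -m 0x78) and the script then waits for the RPC socket to answer. waitforlisten's real implementation lives in autotest_common.sh; the polling loop below is only an approximation.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket (not network-namespaced, so no netns exec needed)
    # until the target answers; rpc_get_methods is a lightweight query.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done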
00:18:55.532 [2024-11-15 10:37:43.846517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:55.532 [2024-11-15 10:37:43.846554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:55.532 [2024-11-15 10:37:43.846601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:55.532 [2024-11-15 10:37:43.846604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:55.532 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.791 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.791 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:55.791 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.791 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.791 [2024-11-15 10:37:44.002270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.791 Malloc0 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.791 [2024-11-15 10:37:44.040037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:55.791 { 00:18:55.791 "params": { 00:18:55.791 "name": "Nvme$subsystem", 00:18:55.791 "trtype": "$TEST_TRANSPORT", 00:18:55.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:55.791 "adrfam": "ipv4", 00:18:55.791 "trsvcid": "$NVMF_PORT", 00:18:55.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:55.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:55.791 "hdgst": ${hdgst:-false}, 00:18:55.791 "ddgst": ${ddgst:-false} 00:18:55.791 }, 00:18:55.791 "method": "bdev_nvme_attach_controller" 00:18:55.791 } 00:18:55.791 EOF 00:18:55.791 )") 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:55.791 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:55.791 "params": { 00:18:55.791 "name": "Nvme1", 00:18:55.791 "trtype": "tcp", 00:18:55.791 "traddr": "10.0.0.2", 00:18:55.791 "adrfam": "ipv4", 00:18:55.791 "trsvcid": "4420", 00:18:55.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.791 "hdgst": false, 00:18:55.791 "ddgst": false 00:18:55.791 }, 00:18:55.791 "method": "bdev_nvme_attach_controller" 00:18:55.791 }' 00:18:55.791 [2024-11-15 10:37:44.089886] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
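The JSON that gen_nvmf_target_json hands to bdevio over /dev/fd/62 is printed in the trace above; a standalone equivalent could look like the sketch below. The attach parameters are copied from this run, while the outer "subsystems"/"config" wrapper and the temporary file name are assumptions (the helper may also emit further config steps not visible in this trace).

    # Assumed file name; the test itself streams the config over /dev/fd/62 instead.
    cat > /tmp/bdevio_nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
        --json /tmp/bdevio_nvme1.json --no-huge -s 1024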
00:18:55.791 [2024-11-15 10:37:44.089960] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid391394 ] 00:18:55.791 [2024-11-15 10:37:44.164052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:55.791 [2024-11-15 10:37:44.229763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.791 [2024-11-15 10:37:44.229815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.791 [2024-11-15 10:37:44.229819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.358 I/O targets: 00:18:56.358 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:56.358 00:18:56.358 00:18:56.358 CUnit - A unit testing framework for C - Version 2.1-3 00:18:56.358 http://cunit.sourceforge.net/ 00:18:56.358 00:18:56.358 00:18:56.358 Suite: bdevio tests on: Nvme1n1 00:18:56.358 Test: blockdev write read block ...passed 00:18:56.358 Test: blockdev write zeroes read block ...passed 00:18:56.358 Test: blockdev write zeroes read no split ...passed 00:18:56.358 Test: blockdev write zeroes read split ...passed 00:18:56.358 Test: blockdev write zeroes read split partial ...passed 00:18:56.358 Test: blockdev reset ...[2024-11-15 10:37:44.711170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:56.358 [2024-11-15 10:37:44.711290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d06e0 (9): Bad file descriptor 00:18:56.358 [2024-11-15 10:37:44.724941] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:56.358 passed 00:18:56.358 Test: blockdev write read 8 blocks ...passed 00:18:56.358 Test: blockdev write read size > 128k ...passed 00:18:56.358 Test: blockdev write read invalid size ...passed 00:18:56.358 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:56.358 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:56.358 Test: blockdev write read max offset ...passed 00:18:56.615 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:56.615 Test: blockdev writev readv 8 blocks ...passed 00:18:56.615 Test: blockdev writev readv 30 x 1block ...passed 00:18:56.615 Test: blockdev writev readv block ...passed 00:18:56.615 Test: blockdev writev readv size > 128k ...passed 00:18:56.615 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:56.615 Test: blockdev comparev and writev ...[2024-11-15 10:37:45.019665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.615 [2024-11-15 10:37:45.019707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.615 [2024-11-15 10:37:45.019733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.616 [2024-11-15 10:37:45.019750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:56.616 [2024-11-15 10:37:45.020108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.616 [2024-11-15 10:37:45.020133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:56.616 [2024-11-15 10:37:45.020155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.616 [2024-11-15 10:37:45.020172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:56.616 [2024-11-15 10:37:45.020536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.616 [2024-11-15 10:37:45.020561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:56.616 [2024-11-15 10:37:45.020583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.616 [2024-11-15 10:37:45.020599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:56.616 [2024-11-15 10:37:45.020901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.616 [2024-11-15 10:37:45.020925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:56.616 [2024-11-15 10:37:45.020947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.616 [2024-11-15 10:37:45.020962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:56.616 passed 00:18:56.873 Test: blockdev nvme passthru rw ...passed 00:18:56.873 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:37:45.102628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:56.873 [2024-11-15 10:37:45.102656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:56.873 [2024-11-15 10:37:45.102798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:56.873 [2024-11-15 10:37:45.102820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:56.873 [2024-11-15 10:37:45.102962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:56.873 [2024-11-15 10:37:45.102984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:56.873 [2024-11-15 10:37:45.103117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:56.873 [2024-11-15 10:37:45.103138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:56.873 passed 00:18:56.873 Test: blockdev nvme admin passthru ...passed 00:18:56.873 Test: blockdev copy ...passed 00:18:56.873 00:18:56.873 Run Summary: Type Total Ran Passed Failed Inactive 00:18:56.873 suites 1 1 n/a 0 0 00:18:56.873 tests 23 23 23 0 0 00:18:56.873 asserts 152 152 152 0 n/a 00:18:56.873 00:18:56.873 Elapsed time = 1.232 seconds 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:57.132 rmmod nvme_tcp 00:18:57.132 rmmod nvme_fabrics 00:18:57.132 rmmod nvme_keyring 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 391247 ']' 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 391247 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 391247 ']' 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 391247 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:57.132 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 391247 00:18:57.390 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:18:57.390 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:18:57.390 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 391247' 00:18:57.390 killing process with pid 391247 00:18:57.390 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 391247 00:18:57.390 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 391247 00:18:57.649 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:57.649 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:57.649 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:57.649 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:57.649 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:57.649 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:57.649 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:57.649 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:57.649 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:57.649 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.649 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.649 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:00.186 00:19:00.186 real 0m6.824s 00:19:00.186 user 0m11.705s 00:19:00.186 sys 0m2.601s 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:00.186 ************************************ 00:19:00.186 END TEST nvmf_bdevio_no_huge 00:19:00.186 ************************************ 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:00.186 ************************************ 00:19:00.186 START TEST nvmf_tls 00:19:00.186 ************************************ 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:00.186 * Looking for test storage... 00:19:00.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:00.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.186 --rc genhtml_branch_coverage=1 00:19:00.186 --rc genhtml_function_coverage=1 00:19:00.186 --rc genhtml_legend=1 00:19:00.186 --rc geninfo_all_blocks=1 00:19:00.186 --rc geninfo_unexecuted_blocks=1 00:19:00.186 00:19:00.186 ' 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:00.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.186 --rc genhtml_branch_coverage=1 00:19:00.186 --rc genhtml_function_coverage=1 00:19:00.186 --rc genhtml_legend=1 00:19:00.186 --rc geninfo_all_blocks=1 00:19:00.186 --rc geninfo_unexecuted_blocks=1 00:19:00.186 00:19:00.186 ' 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:00.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.186 --rc genhtml_branch_coverage=1 00:19:00.186 --rc genhtml_function_coverage=1 00:19:00.186 --rc genhtml_legend=1 00:19:00.186 --rc geninfo_all_blocks=1 00:19:00.186 --rc geninfo_unexecuted_blocks=1 00:19:00.186 00:19:00.186 ' 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:00.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.186 --rc genhtml_branch_coverage=1 00:19:00.186 --rc genhtml_function_coverage=1 00:19:00.186 --rc genhtml_legend=1 00:19:00.186 --rc geninfo_all_blocks=1 00:19:00.186 --rc geninfo_unexecuted_blocks=1 00:19:00.186 00:19:00.186 ' 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
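The scripts/common.sh trace above (lt 1.15 2 via cmp_versions) decides which lcov options to use by comparing dotted version strings component by component. A minimal sketch of that style of comparison, with illustrative names rather than the exact helpers from scripts/common.sh:

lt_version() {
    # True (exit 0) when $1 sorts strictly before $2, comparing numeric components.
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt_version 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints the message, as in the trace above

Missing components are treated as zero, so 2 and 2.0 compare equal; 1.15 sorts below 2 because the first component already decides the comparison.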
00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.186 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:00.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:00.187 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
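The gather_supported_nvmf_pci_devs block above collects the supported NIC PCI IDs (Intel E810/X722 and Mellanox parts) and then walks /sys/bus/pci/devices/<bdf>/net to map each matching device to a kernel interface, as the "Found 0000:82:00.x" lines that follow show. A standalone sketch of the same lookup, assuming pciutils' lspci is available; the real helper reads a pre-built pci_bus_cache instead of calling lspci:

for id in 8086:1592 8086:159b; do            # E810 device IDs used by the test
    while read -r bdf _; do
        [[ -n $bdf ]] || continue
        for netdir in "/sys/bus/pci/devices/$bdf/net/"*; do
            # The glob only resolves if the device is bound to a kernel network driver.
            [[ -e $netdir ]] && echo "Found net device ${netdir##*/} under $bdf ($id)"
        done
    done < <(lspci -Dn -d "$id")
done

On this node the two E810 ports resolve to cvl_0_0 and cvl_0_1, which the script then assigns as the target and initiator interfaces.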
00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:19:02.090 Found 0000:82:00.0 (0x8086 - 0x159b) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:19:02.090 Found 0000:82:00.1 (0x8086 - 0x159b) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:19:02.090 Found net devices under 0000:82:00.0: cvl_0_0 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.090 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:19:02.091 Found net devices under 0000:82:00.1: cvl_0_1 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:02.091 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:02.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:02.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:19:02.350 00:19:02.350 --- 10.0.0.2 ping statistics --- 00:19:02.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.350 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:02.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:02.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:19:02.350 00:19:02.350 --- 10.0.0.1 ping statistics --- 00:19:02.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.350 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=393478 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 393478 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 393478 ']' 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:02.350 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.350 [2024-11-15 10:37:50.692115] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:19:02.350 [2024-11-15 10:37:50.692213] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.350 [2024-11-15 10:37:50.767443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.608 [2024-11-15 10:37:50.825091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.608 [2024-11-15 10:37:50.825143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.608 [2024-11-15 10:37:50.825170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.608 [2024-11-15 10:37:50.825181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.608 [2024-11-15 10:37:50.825190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.608 [2024-11-15 10:37:50.825864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.608 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:02.608 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:02.608 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:02.608 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:02.608 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.608 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.608 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:02.608 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:02.867 true 00:19:02.867 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:02.867 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:03.125 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:03.125 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:03.125 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:03.383 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.383 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:03.640 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:03.640 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:03.641 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:04.207 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:04.207 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:04.207 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:04.207 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:04.207 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:04.207 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:04.465 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:04.465 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:04.465 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:05.029 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:05.029 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:05.029 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:05.029 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:05.029 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:05.286 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:05.286 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:05.543 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:05.543 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:05.543 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:05.543 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:05.543 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:05.543 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:05.543 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:05.543 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:05.543 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.MNsoSSh8sG 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.FtOw4ABGoR 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.MNsoSSh8sG 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.FtOw4ABGoR 00:19:05.801 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:06.057 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:06.314 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.MNsoSSh8sG 00:19:06.314 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MNsoSSh8sG 00:19:06.314 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:06.877 [2024-11-15 10:37:55.042688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.877 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:07.135 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:07.393 [2024-11-15 10:37:55.644303] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:07.393 [2024-11-15 10:37:55.644598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.393 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:07.651 malloc0 00:19:07.651 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:07.909 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.MNsoSSh8sG 00:19:08.166 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:08.422 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.MNsoSSh8sG 00:19:20.621 Initializing NVMe Controllers 00:19:20.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:20.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:20.621 Initialization complete. Launching workers. 00:19:20.621 ======================================================== 00:19:20.621 Latency(us) 00:19:20.621 Device Information : IOPS MiB/s Average min max 00:19:20.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8619.36 33.67 7426.43 1219.19 9367.15 00:19:20.621 ======================================================== 00:19:20.621 Total : 8619.36 33.67 7426.43 1219.19 9367.15 00:19:20.621 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MNsoSSh8sG 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MNsoSSh8sG 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=395498 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 395498 /var/tmp/bdevperf.sock 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 395498 ']' 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:20.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:20.621 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.621 [2024-11-15 10:38:06.976222] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:19:20.621 [2024-11-15 10:38:06.976296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395498 ] 00:19:20.621 [2024-11-15 10:38:07.041842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.621 [2024-11-15 10:38:07.098128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.621 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:20.621 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:20.621 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MNsoSSh8sG 00:19:20.621 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.621 [2024-11-15 10:38:07.779580] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.621 TLSTESTn1 00:19:20.621 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:20.621 Running I/O for 10 seconds... 
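The key0 registered with keyring_file_add_key for this run points at /tmp/tmp.MNsoSSh8sG, which was written by the format_interchange_psk step earlier (format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1). A minimal sketch of that formatting, under the assumption that the helper base64-encodes the configured key bytes followed by their little-endian CRC32 and prefixes the hash indicator 01; nvmf/common.sh's format_key is the authoritative version:

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")      # CRC32 over the key characters, little-endian
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())
PY

If that assumption holds, the output is the same NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1... string echoed into the key file above, and the second key file (/tmp/tmp.FtOw4ABGoR) is built the same way from ffeeddccbbaa99887766554433221100.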
00:19:21.555 3493.00 IOPS, 13.64 MiB/s [2024-11-15T09:38:11.392Z] 3537.50 IOPS, 13.82 MiB/s [2024-11-15T09:38:12.326Z] 3462.00 IOPS, 13.52 MiB/s [2024-11-15T09:38:13.260Z] 3501.50 IOPS, 13.68 MiB/s [2024-11-15T09:38:14.194Z] 3513.20 IOPS, 13.72 MiB/s [2024-11-15T09:38:15.128Z] 3547.67 IOPS, 13.86 MiB/s [2024-11-15T09:38:16.062Z] 3538.14 IOPS, 13.82 MiB/s [2024-11-15T09:38:17.436Z] 3520.00 IOPS, 13.75 MiB/s [2024-11-15T09:38:18.371Z] 3514.44 IOPS, 13.73 MiB/s [2024-11-15T09:38:18.371Z] 3543.10 IOPS, 13.84 MiB/s 00:19:29.908 Latency(us) 00:19:29.908 [2024-11-15T09:38:18.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.908 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:29.908 Verification LBA range: start 0x0 length 0x2000 00:19:29.908 TLSTESTn1 : 10.02 3548.07 13.86 0.00 0.00 36013.58 7864.32 43496.49 00:19:29.908 [2024-11-15T09:38:18.371Z] =================================================================================================================== 00:19:29.908 [2024-11-15T09:38:18.371Z] Total : 3548.07 13.86 0.00 0.00 36013.58 7864.32 43496.49 00:19:29.908 { 00:19:29.908 "results": [ 00:19:29.908 { 00:19:29.908 "job": "TLSTESTn1", 00:19:29.908 "core_mask": "0x4", 00:19:29.908 "workload": "verify", 00:19:29.908 "status": "finished", 00:19:29.908 "verify_range": { 00:19:29.908 "start": 0, 00:19:29.908 "length": 8192 00:19:29.908 }, 00:19:29.908 "queue_depth": 128, 00:19:29.908 "io_size": 4096, 00:19:29.908 "runtime": 10.021493, 00:19:29.908 "iops": 3548.074124284675, 00:19:29.908 "mibps": 13.85966454798701, 00:19:29.908 "io_failed": 0, 00:19:29.908 "io_timeout": 0, 00:19:29.908 "avg_latency_us": 36013.57950833247, 00:19:29.908 "min_latency_us": 7864.32, 00:19:29.908 "max_latency_us": 43496.485925925925 00:19:29.908 } 00:19:29.908 ], 00:19:29.908 "core_count": 1 00:19:29.908 } 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 395498 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 395498 ']' 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 395498 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 395498 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 395498' 00:19:29.908 killing process with pid 395498 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 395498 00:19:29.908 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.908 00:19:29.908 Latency(us) 00:19:29.908 [2024-11-15T09:38:18.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.908 [2024-11-15T09:38:18.371Z] 
=================================================================================================================== 00:19:29.908 [2024-11-15T09:38:18.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 395498 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FtOw4ABGoR 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FtOw4ABGoR 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FtOw4ABGoR 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FtOw4ABGoR 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=396816 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 396816 /var/tmp/bdevperf.sock 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 396816 ']' 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:29.908 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:29.909 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:29.909 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.909 [2024-11-15 10:38:18.356821] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:19:29.909 [2024-11-15 10:38:18.356900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396816 ] 00:19:30.167 [2024-11-15 10:38:18.423108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.167 [2024-11-15 10:38:18.481203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.167 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:30.167 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:30.167 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FtOw4ABGoR 00:19:30.425 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:30.684 [2024-11-15 10:38:19.090771] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.684 [2024-11-15 10:38:19.098384] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:30.684 [2024-11-15 10:38:19.098902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa002c0 (107): Transport endpoint is not connected 00:19:30.684 [2024-11-15 10:38:19.099891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa002c0 (9): Bad file descriptor 00:19:30.684 [2024-11-15 10:38:19.100892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:30.684 [2024-11-15 10:38:19.100910] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:30.684 [2024-11-15 10:38:19.100938] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:30.684 [2024-11-15 10:38:19.100957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:30.684 request: 00:19:30.684 { 00:19:30.684 "name": "TLSTEST", 00:19:30.684 "trtype": "tcp", 00:19:30.684 "traddr": "10.0.0.2", 00:19:30.684 "adrfam": "ipv4", 00:19:30.684 "trsvcid": "4420", 00:19:30.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.684 "prchk_reftag": false, 00:19:30.684 "prchk_guard": false, 00:19:30.684 "hdgst": false, 00:19:30.684 "ddgst": false, 00:19:30.684 "psk": "key0", 00:19:30.684 "allow_unrecognized_csi": false, 00:19:30.684 "method": "bdev_nvme_attach_controller", 00:19:30.684 "req_id": 1 00:19:30.684 } 00:19:30.684 Got JSON-RPC error response 00:19:30.684 response: 00:19:30.684 { 00:19:30.684 "code": -5, 00:19:30.684 "message": "Input/output error" 00:19:30.684 } 00:19:30.684 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 396816 00:19:30.684 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 396816 ']' 00:19:30.684 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 396816 00:19:30.684 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:30.684 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:30.684 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 396816 00:19:30.942 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:30.942 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:30.942 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 396816' 00:19:30.942 killing process with pid 396816 00:19:30.942 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 396816 00:19:30.942 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.942 00:19:30.942 Latency(us) 00:19:30.942 [2024-11-15T09:38:19.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.942 [2024-11-15T09:38:19.405Z] =================================================================================================================== 00:19:30.942 [2024-11-15T09:38:19.405Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:30.942 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 396816 00:19:30.942 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:30.942 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MNsoSSh8sG 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.MNsoSSh8sG 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MNsoSSh8sG 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MNsoSSh8sG 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=396917 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 396917 /var/tmp/bdevperf.sock 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 396917 ']' 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:30.943 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.201 [2024-11-15 10:38:19.421000] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
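The NOT wrapper around these expected-failure cases inverts the exit status of the command it runs; the es= bookkeeping and the (( es > 128 )) check visible in the trace are part of its signal handling. A simplified reading of that helper, not the exact autotest_common.sh source:

NOT() {
    local es=0
    "$@" || es=$?      # run the wrapped command and capture its status
    # the real helper also special-cases exits above 128 (killed by signal);
    # the basic contract is simply: succeed only if the wrapped command failed
    ((es != 0))
}

NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/psk.key   # key path illustrative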
00:19:31.201 [2024-11-15 10:38:19.421096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396917 ] 00:19:31.201 [2024-11-15 10:38:19.493856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.201 [2024-11-15 10:38:19.555970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.201 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:31.201 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:31.201 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MNsoSSh8sG 00:19:31.768 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:31.768 [2024-11-15 10:38:20.192637] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.768 [2024-11-15 10:38:20.201089] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:31.768 [2024-11-15 10:38:20.201116] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:31.768 [2024-11-15 10:38:20.201169] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:31.768 [2024-11-15 10:38:20.201972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe62c0 (107): Transport endpoint is not connected 00:19:31.768 [2024-11-15 10:38:20.202945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe62c0 (9): Bad file descriptor 00:19:31.768 [2024-11-15 10:38:20.203945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:31.768 [2024-11-15 10:38:20.203963] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:31.768 [2024-11-15 10:38:20.203991] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:31.768 [2024-11-15 10:38:20.204009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:31.768 request: 00:19:31.768 { 00:19:31.768 "name": "TLSTEST", 00:19:31.768 "trtype": "tcp", 00:19:31.768 "traddr": "10.0.0.2", 00:19:31.768 "adrfam": "ipv4", 00:19:31.768 "trsvcid": "4420", 00:19:31.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.768 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:31.768 "prchk_reftag": false, 00:19:31.768 "prchk_guard": false, 00:19:31.768 "hdgst": false, 00:19:31.768 "ddgst": false, 00:19:31.768 "psk": "key0", 00:19:31.768 "allow_unrecognized_csi": false, 00:19:31.768 "method": "bdev_nvme_attach_controller", 00:19:31.768 "req_id": 1 00:19:31.768 } 00:19:31.768 Got JSON-RPC error response 00:19:31.768 response: 00:19:31.768 { 00:19:31.768 "code": -5, 00:19:31.768 "message": "Input/output error" 00:19:31.768 } 00:19:31.768 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 396917 00:19:31.768 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 396917 ']' 00:19:31.768 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 396917 00:19:31.768 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:31.768 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:31.768 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 396917 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 396917' 00:19:32.026 killing process with pid 396917 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 396917 00:19:32.026 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.026 00:19:32.026 Latency(us) 00:19:32.026 [2024-11-15T09:38:20.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.026 [2024-11-15T09:38:20.489Z] =================================================================================================================== 00:19:32.026 [2024-11-15T09:38:20.489Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 396917 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MNsoSSh8sG 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.MNsoSSh8sG 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MNsoSSh8sG 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MNsoSSh8sG 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=397021 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 397021 /var/tmp/bdevperf.sock 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 397021 ']' 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:32.026 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.284 [2024-11-15 10:38:20.531570] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
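Both mismatch cases in this stretch (the wrong-hostnqn attempt above and the wrong-subnqn attempt being launched here) fail at PSK lookup rather than during I/O: the TLS listener resolves the key by an identity built from the connecting host NQN and the subsystem NQN, so a PSK that was registered earlier in the run for a different host/subsystem pairing is never found for these combinations. Illustrative only, with the identity format copied from the error text above:

# identity the TLS listener tries to resolve for the wrong-hostnqn attempt
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
echo "NVMe0R01 ${hostnqn} ${subnqn}"   # no PSK is registered under this identity, so the attach fails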
00:19:32.284 [2024-11-15 10:38:20.531680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397021 ] 00:19:32.284 [2024-11-15 10:38:20.604125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.284 [2024-11-15 10:38:20.666866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.542 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:32.542 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:32.542 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MNsoSSh8sG 00:19:32.800 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:33.058 [2024-11-15 10:38:21.332088] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.058 [2024-11-15 10:38:21.344041] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:33.058 [2024-11-15 10:38:21.344069] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:33.058 [2024-11-15 10:38:21.344120] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:33.058 [2024-11-15 10:38:21.344294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22032c0 (107): Transport endpoint is not connected 00:19:33.058 [2024-11-15 10:38:21.345285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22032c0 (9): Bad file descriptor 00:19:33.058 [2024-11-15 10:38:21.346285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:33.058 [2024-11-15 10:38:21.346304] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:33.058 [2024-11-15 10:38:21.346333] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:33.058 [2024-11-15 10:38:21.346350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:19:33.058 request: 00:19:33.058 { 00:19:33.058 "name": "TLSTEST", 00:19:33.058 "trtype": "tcp", 00:19:33.058 "traddr": "10.0.0.2", 00:19:33.058 "adrfam": "ipv4", 00:19:33.058 "trsvcid": "4420", 00:19:33.058 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:33.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.058 "prchk_reftag": false, 00:19:33.058 "prchk_guard": false, 00:19:33.058 "hdgst": false, 00:19:33.058 "ddgst": false, 00:19:33.058 "psk": "key0", 00:19:33.058 "allow_unrecognized_csi": false, 00:19:33.058 "method": "bdev_nvme_attach_controller", 00:19:33.058 "req_id": 1 00:19:33.058 } 00:19:33.058 Got JSON-RPC error response 00:19:33.058 response: 00:19:33.058 { 00:19:33.058 "code": -5, 00:19:33.058 "message": "Input/output error" 00:19:33.058 } 00:19:33.058 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 397021 00:19:33.058 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 397021 ']' 00:19:33.058 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 397021 00:19:33.058 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:33.058 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:33.058 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 397021 00:19:33.058 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:33.058 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:33.058 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 397021' 00:19:33.058 killing process with pid 397021 00:19:33.058 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 397021 00:19:33.058 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.058 00:19:33.058 Latency(us) 00:19:33.058 [2024-11-15T09:38:21.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.058 [2024-11-15T09:38:21.521Z] =================================================================================================================== 00:19:33.058 [2024-11-15T09:38:21.521Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.058 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 397021 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:33.317 10:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=397154 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 397154 /var/tmp/bdevperf.sock 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 397154 ']' 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:33.317 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.317 [2024-11-15 10:38:21.639125] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:19:33.317 [2024-11-15 10:38:21.639202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397154 ] 00:19:33.317 [2024-11-15 10:38:21.710121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.317 [2024-11-15 10:38:21.768936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.576 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:33.576 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:33.576 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:33.833 [2024-11-15 10:38:22.131704] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:33.833 [2024-11-15 10:38:22.131742] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:33.833 request: 00:19:33.833 { 00:19:33.833 "name": "key0", 00:19:33.833 "path": "", 00:19:33.833 "method": "keyring_file_add_key", 00:19:33.833 "req_id": 1 00:19:33.833 } 00:19:33.833 Got JSON-RPC error response 00:19:33.833 response: 00:19:33.833 { 00:19:33.833 "code": -1, 00:19:33.833 "message": "Operation not permitted" 00:19:33.833 } 00:19:33.833 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.091 [2024-11-15 10:38:22.392523] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:34.091 [2024-11-15 10:38:22.392584] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:34.091 request: 00:19:34.091 { 00:19:34.091 "name": "TLSTEST", 00:19:34.091 "trtype": "tcp", 00:19:34.091 "traddr": "10.0.0.2", 00:19:34.091 "adrfam": "ipv4", 00:19:34.091 "trsvcid": "4420", 00:19:34.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.091 "prchk_reftag": false, 00:19:34.091 "prchk_guard": false, 00:19:34.091 "hdgst": false, 00:19:34.091 "ddgst": false, 00:19:34.091 "psk": "key0", 00:19:34.091 "allow_unrecognized_csi": false, 00:19:34.091 "method": "bdev_nvme_attach_controller", 00:19:34.091 "req_id": 1 00:19:34.091 } 00:19:34.091 Got JSON-RPC error response 00:19:34.091 response: 00:19:34.091 { 00:19:34.091 "code": -126, 00:19:34.091 "message": "Required key not available" 00:19:34.091 } 00:19:34.091 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 397154 00:19:34.091 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 397154 ']' 00:19:34.091 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 397154 00:19:34.091 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:34.091 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:34.091 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 397154 
00:19:34.091 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:34.091 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:34.091 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 397154' 00:19:34.091 killing process with pid 397154 00:19:34.091 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 397154 00:19:34.091 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.091 00:19:34.091 Latency(us) 00:19:34.091 [2024-11-15T09:38:22.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.091 [2024-11-15T09:38:22.554Z] =================================================================================================================== 00:19:34.091 [2024-11-15T09:38:22.554Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:34.091 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 397154 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 393478 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 393478 ']' 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 393478 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 393478 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 393478' 00:19:34.349 killing process with pid 393478 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 393478 00:19:34.349 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 393478 00:19:34.606 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:34.606 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:34.606 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.606 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:34.606 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:34.606 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:34.606 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:34.606 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.gk0wdYDzMM 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.gk0wdYDzMM 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=397396 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 397396 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 397396 ']' 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:34.607 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.607 [2024-11-15 10:38:23.016138] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:19:34.607 [2024-11-15 10:38:23.016215] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.865 [2024-11-15 10:38:23.091025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.865 [2024-11-15 10:38:23.147839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.865 [2024-11-15 10:38:23.147910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
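format_interchange_psk above wraps the raw secret into the NVMe TLS PSK interchange form NVMeTLSkey-1:<hh>:<base64>:, where <hh> is the hash identifier (02 here) and the base64 payload carries the key bytes plus a short integrity trailer. A rough sketch of what the python step appears to compute; the CRC32 trailer and its little-endian byte order are assumptions on the editor's part, not something stated in the trace:

format_key() {
    # sketch only; argument names mirror the prefix/key/digest locals traced above
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" << 'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
# assumed: a little-endian CRC32 of the key bytes is appended before base64 encoding
blob = base64.b64encode(key + zlib.crc32(key).to_bytes(4, "little")).decode()
print(f"{prefix}:{digest:02x}:{blob}:", end="")
PYEOF
}

format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2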
00:19:34.865 [2024-11-15 10:38:23.147923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.865 [2024-11-15 10:38:23.147934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.865 [2024-11-15 10:38:23.147957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.865 [2024-11-15 10:38:23.148554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.865 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:34.865 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:34.865 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:34.865 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:34.865 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.865 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.865 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.gk0wdYDzMM 00:19:34.865 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gk0wdYDzMM 00:19:34.865 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.123 [2024-11-15 10:38:23.517840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.123 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:35.380 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:35.637 [2024-11-15 10:38:24.059296] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.637 [2024-11-15 10:38:24.059554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.637 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:35.895 malloc0 00:19:35.895 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:36.460 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gk0wdYDzMM 00:19:36.460 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gk0wdYDzMM 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gk0wdYDzMM 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=397682 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 397682 /var/tmp/bdevperf.sock 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 397682 ']' 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:36.717 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.975 [2024-11-15 10:38:25.201500] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
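On the target side, setup_nvmf_tgt for this key condenses to the RPC sequence already traced above: -k on the listener is what turns on TLS for the TCP transport, and the PSK is bound to a specific host with nvmf_subsystem_add_host. Paths shortened; these calls go to the target's default RPC socket, in contrast to the bdevperf socket used by the client side:

scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS-enabled listener
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gk0wdYDzMM
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0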
00:19:36.975 [2024-11-15 10:38:25.201574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397682 ] 00:19:36.975 [2024-11-15 10:38:25.266243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.975 [2024-11-15 10:38:25.323684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.975 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:36.975 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:36.975 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gk0wdYDzMM 00:19:37.233 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:37.490 [2024-11-15 10:38:25.931444] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.749 TLSTESTn1 00:19:37.749 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:37.749 Running I/O for 10 seconds... 00:19:40.055 3394.00 IOPS, 13.26 MiB/s [2024-11-15T09:38:29.452Z] 3388.50 IOPS, 13.24 MiB/s [2024-11-15T09:38:30.386Z] 3431.33 IOPS, 13.40 MiB/s [2024-11-15T09:38:31.321Z] 3448.00 IOPS, 13.47 MiB/s [2024-11-15T09:38:32.256Z] 3423.60 IOPS, 13.37 MiB/s [2024-11-15T09:38:33.191Z] 3427.00 IOPS, 13.39 MiB/s [2024-11-15T09:38:34.566Z] 3448.00 IOPS, 13.47 MiB/s [2024-11-15T09:38:35.500Z] 3444.50 IOPS, 13.46 MiB/s [2024-11-15T09:38:36.434Z] 3456.78 IOPS, 13.50 MiB/s [2024-11-15T09:38:36.434Z] 3460.80 IOPS, 13.52 MiB/s 00:19:47.971 Latency(us) 00:19:47.971 [2024-11-15T09:38:36.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.971 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:47.971 Verification LBA range: start 0x0 length 0x2000 00:19:47.971 TLSTESTn1 : 10.02 3466.70 13.54 0.00 0.00 36865.85 6092.42 30292.20 00:19:47.971 [2024-11-15T09:38:36.434Z] =================================================================================================================== 00:19:47.971 [2024-11-15T09:38:36.434Z] Total : 3466.70 13.54 0.00 0.00 36865.85 6092.42 30292.20 00:19:47.971 { 00:19:47.971 "results": [ 00:19:47.971 { 00:19:47.971 "job": "TLSTESTn1", 00:19:47.971 "core_mask": "0x4", 00:19:47.971 "workload": "verify", 00:19:47.971 "status": "finished", 00:19:47.971 "verify_range": { 00:19:47.971 "start": 0, 00:19:47.971 "length": 8192 00:19:47.971 }, 00:19:47.971 "queue_depth": 128, 00:19:47.971 "io_size": 4096, 00:19:47.971 "runtime": 10.019605, 00:19:47.971 "iops": 3466.7035277338778, 00:19:47.971 "mibps": 13.54181065521046, 00:19:47.971 "io_failed": 0, 00:19:47.971 "io_timeout": 0, 00:19:47.971 "avg_latency_us": 36865.8474353438, 00:19:47.971 "min_latency_us": 6092.420740740741, 00:19:47.971 "max_latency_us": 30292.195555555554 00:19:47.971 } 00:19:47.971 ], 00:19:47.971 
"core_count": 1 00:19:47.971 } 00:19:47.971 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.971 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 397682 00:19:47.971 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 397682 ']' 00:19:47.971 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 397682 00:19:47.971 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:47.971 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:47.971 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 397682 00:19:47.971 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:47.971 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:47.971 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 397682' 00:19:47.971 killing process with pid 397682 00:19:47.971 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 397682 00:19:47.971 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.971 00:19:47.971 Latency(us) 00:19:47.971 [2024-11-15T09:38:36.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.971 [2024-11-15T09:38:36.434Z] =================================================================================================================== 00:19:47.971 [2024-11-15T09:38:36.434Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.971 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 397682 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.gk0wdYDzMM 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gk0wdYDzMM 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gk0wdYDzMM 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gk0wdYDzMM 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:48.229 
10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gk0wdYDzMM 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=399002 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 399002 /var/tmp/bdevperf.sock 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 399002 ']' 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:48.229 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.229 [2024-11-15 10:38:36.500096] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:19:48.229 [2024-11-15 10:38:36.500176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399002 ] 00:19:48.229 [2024-11-15 10:38:36.565797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.229 [2024-11-15 10:38:36.623064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.486 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:48.486 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:48.486 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gk0wdYDzMM 00:19:48.743 [2024-11-15 10:38:36.982272] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gk0wdYDzMM': 0100666 00:19:48.743 [2024-11-15 10:38:36.982318] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:48.743 request: 00:19:48.743 { 00:19:48.743 "name": "key0", 00:19:48.743 "path": "/tmp/tmp.gk0wdYDzMM", 00:19:48.743 "method": "keyring_file_add_key", 00:19:48.743 "req_id": 1 00:19:48.743 } 00:19:48.743 Got JSON-RPC error response 00:19:48.743 response: 00:19:48.743 { 00:19:48.743 "code": -1, 00:19:48.743 "message": "Operation not permitted" 00:19:48.743 } 00:19:48.743 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:49.002 [2024-11-15 10:38:37.243106] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.002 [2024-11-15 10:38:37.243166] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:49.002 request: 00:19:49.002 { 00:19:49.002 "name": "TLSTEST", 00:19:49.002 "trtype": "tcp", 00:19:49.002 "traddr": "10.0.0.2", 00:19:49.002 "adrfam": "ipv4", 00:19:49.002 "trsvcid": "4420", 00:19:49.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.002 "prchk_reftag": false, 00:19:49.002 "prchk_guard": false, 00:19:49.002 "hdgst": false, 00:19:49.002 "ddgst": false, 00:19:49.002 "psk": "key0", 00:19:49.002 "allow_unrecognized_csi": false, 00:19:49.002 "method": "bdev_nvme_attach_controller", 00:19:49.002 "req_id": 1 00:19:49.002 } 00:19:49.002 Got JSON-RPC error response 00:19:49.002 response: 00:19:49.002 { 00:19:49.002 "code": -126, 00:19:49.002 "message": "Required key not available" 00:19:49.002 } 00:19:49.002 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 399002 00:19:49.002 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 399002 ']' 00:19:49.002 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 399002 00:19:49.002 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:49.002 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:49.002 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 399002 00:19:49.002 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:49.002 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:49.002 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 399002' 00:19:49.002 killing process with pid 399002 00:19:49.002 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 399002 00:19:49.002 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.002 00:19:49.002 Latency(us) 00:19:49.002 [2024-11-15T09:38:37.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.002 [2024-11-15T09:38:37.465Z] =================================================================================================================== 00:19:49.002 [2024-11-15T09:38:37.465Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:49.002 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 399002 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 397396 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 397396 ']' 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 397396 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 397396 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 397396' 00:19:49.260 killing process with pid 397396 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 397396 00:19:49.260 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 397396 00:19:49.519 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:49.519 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:49.519 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:49.519 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.519 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:49.519 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=399145 00:19:49.519 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 399145 00:19:49.519 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 399145 ']' 00:19:49.519 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.519 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:49.519 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.519 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:49.519 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.519 [2024-11-15 10:38:37.840896] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:19:49.519 [2024-11-15 10:38:37.841000] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.519 [2024-11-15 10:38:37.916624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.519 [2024-11-15 10:38:37.973556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.519 [2024-11-15 10:38:37.973611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.519 [2024-11-15 10:38:37.973641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.519 [2024-11-15 10:38:37.973652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.519 [2024-11-15 10:38:37.973661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
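The two JSON-RPC failures above are the intended negative path of this test case rather than a defect: keyring_file_add_key rejects /tmp/tmp.gk0wdYDzMM while the file is group/other-readable (the log reports mode 0100666), so the follow-up bdev_nvme_attach_controller with --psk key0 fails with "Required key not available" and the script tears that bdevperf instance down. Later in the run (target/tls.sh@182) the file is tightened to 0600, after which the same key loads cleanly. A minimal sketch of that check, using only the rpc.py invocation and key path that appear in the trace; how the key file ends up at mode 0666 in the first place is not shown in this section and is assumed to happen in an earlier step:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as it appears in the trace
KEY=/tmp/tmp.gk0wdYDzMM                                                # PSK file as it appears in the trace
chmod 0666 "$KEY"     # assumed earlier step: key left readable by group/other
if "$RPC" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY"; then
    echo "unexpected: world-readable key file was accepted" >&2
    exit 1
fi                    # expected failure: "Operation not permitted", as recorded above
chmod 0600 "$KEY"     # target/tls.sh@182; a later, fresh bdevperf instance then loads key0 successfully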
00:19:49.519 [2024-11-15 10:38:37.974239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.gk0wdYDzMM 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.gk0wdYDzMM 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.gk0wdYDzMM 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gk0wdYDzMM 00:19:49.778 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:50.036 [2024-11-15 10:38:38.358214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.036 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:50.294 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:50.552 [2024-11-15 10:38:38.891697] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:50.552 [2024-11-15 10:38:38.891976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.552 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:50.811 malloc0 00:19:50.811 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:51.069 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gk0wdYDzMM 00:19:51.330 [2024-11-15 
10:38:39.752332] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gk0wdYDzMM': 0100666 00:19:51.330 [2024-11-15 10:38:39.752409] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:51.330 request: 00:19:51.330 { 00:19:51.330 "name": "key0", 00:19:51.330 "path": "/tmp/tmp.gk0wdYDzMM", 00:19:51.330 "method": "keyring_file_add_key", 00:19:51.330 "req_id": 1 00:19:51.330 } 00:19:51.330 Got JSON-RPC error response 00:19:51.330 response: 00:19:51.330 { 00:19:51.330 "code": -1, 00:19:51.330 "message": "Operation not permitted" 00:19:51.330 } 00:19:51.330 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:51.896 [2024-11-15 10:38:40.073251] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:51.896 [2024-11-15 10:38:40.073332] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:51.896 request: 00:19:51.896 { 00:19:51.896 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.896 "host": "nqn.2016-06.io.spdk:host1", 00:19:51.896 "psk": "key0", 00:19:51.896 "method": "nvmf_subsystem_add_host", 00:19:51.896 "req_id": 1 00:19:51.896 } 00:19:51.896 Got JSON-RPC error response 00:19:51.896 response: 00:19:51.896 { 00:19:51.896 "code": -32603, 00:19:51.896 "message": "Internal error" 00:19:51.896 } 00:19:51.896 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:51.896 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:51.896 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:51.896 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:51.896 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 399145 00:19:51.897 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 399145 ']' 00:19:51.897 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 399145 00:19:51.897 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:51.897 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.897 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 399145 00:19:51.897 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:51.897 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:51.897 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 399145' 00:19:51.897 killing process with pid 399145 00:19:51.897 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 399145 00:19:51.897 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 399145 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.gk0wdYDzMM 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=399455 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 399455 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 399455 ']' 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:52.155 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.155 [2024-11-15 10:38:40.429261] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:19:52.155 [2024-11-15 10:38:40.429350] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.155 [2024-11-15 10:38:40.500024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.155 [2024-11-15 10:38:40.554296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.155 [2024-11-15 10:38:40.554360] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.156 [2024-11-15 10:38:40.554396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.156 [2024-11-15 10:38:40.554407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.156 [2024-11-15 10:38:40.554416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
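With /tmp/tmp.gk0wdYDzMM now restricted to 0600 and a fresh nvmf_tgt (pid 399455) listening on /var/tmp/spdk.sock, the setup_nvmf_tgt helper that follows (target/tls.sh@50-59) completes without errors. Condensed to the rpc.py calls that appear verbatim in the trace below; the ip-netns wrapper around nvmf_tgt and all error handling are omitted, and the long workspace path is shortened to $RPC:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py      # talks to /var/tmp/spdk.sock by default
"$RPC" nvmf_create_transport -t tcp -o                                    # TCP transport, flags exactly as in the trace
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
"$RPC" bdev_malloc_create 32 4096 -b malloc0                              # 32 MiB backing bdev, 4 KiB blocks
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$RPC" keyring_file_add_key key0 /tmp/tmp.gk0wdYDzMM                      # accepted now that the file is 0600
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0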
00:19:52.156 [2024-11-15 10:38:40.554970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.414 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.414 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:52.414 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.414 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.414 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.414 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.414 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.gk0wdYDzMM 00:19:52.414 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gk0wdYDzMM 00:19:52.414 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:52.672 [2024-11-15 10:38:40.946541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.672 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:52.929 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:53.187 [2024-11-15 10:38:41.471983] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.187 [2024-11-15 10:38:41.472228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.187 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:53.445 malloc0 00:19:53.445 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:53.703 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gk0wdYDzMM 00:19:53.961 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.220 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=399739 00:19:54.221 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.221 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.221 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 399739 /var/tmp/bdevperf.sock 00:19:54.221 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 399739 ']' 00:19:54.221 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.221 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:54.221 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.221 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:54.221 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.221 [2024-11-15 10:38:42.599462] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:19:54.221 [2024-11-15 10:38:42.599539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399739 ] 00:19:54.221 [2024-11-15 10:38:42.664300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.479 [2024-11-15 10:38:42.721911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.479 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:54.479 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:54.479 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gk0wdYDzMM 00:19:54.737 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.995 [2024-11-15 10:38:43.351118] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.995 TLSTESTn1 00:19:54.995 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:55.564 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:55.564 "subsystems": [ 00:19:55.564 { 00:19:55.564 "subsystem": "keyring", 00:19:55.564 "config": [ 00:19:55.564 { 00:19:55.564 "method": "keyring_file_add_key", 00:19:55.564 "params": { 00:19:55.564 "name": "key0", 00:19:55.564 "path": "/tmp/tmp.gk0wdYDzMM" 00:19:55.564 } 00:19:55.564 } 00:19:55.564 ] 00:19:55.564 }, 00:19:55.564 { 00:19:55.564 "subsystem": "iobuf", 00:19:55.564 "config": [ 00:19:55.564 { 00:19:55.564 "method": "iobuf_set_options", 00:19:55.564 "params": { 00:19:55.564 "small_pool_count": 8192, 00:19:55.564 "large_pool_count": 1024, 00:19:55.564 "small_bufsize": 8192, 00:19:55.564 "large_bufsize": 135168, 00:19:55.564 "enable_numa": false 00:19:55.564 } 00:19:55.564 } 00:19:55.564 ] 00:19:55.564 }, 00:19:55.564 { 00:19:55.564 "subsystem": "sock", 00:19:55.564 "config": [ 00:19:55.564 { 00:19:55.564 "method": "sock_set_default_impl", 00:19:55.564 "params": { 00:19:55.564 "impl_name": "posix" 
00:19:55.564 } 00:19:55.564 }, 00:19:55.564 { 00:19:55.564 "method": "sock_impl_set_options", 00:19:55.564 "params": { 00:19:55.564 "impl_name": "ssl", 00:19:55.564 "recv_buf_size": 4096, 00:19:55.564 "send_buf_size": 4096, 00:19:55.564 "enable_recv_pipe": true, 00:19:55.564 "enable_quickack": false, 00:19:55.564 "enable_placement_id": 0, 00:19:55.564 "enable_zerocopy_send_server": true, 00:19:55.564 "enable_zerocopy_send_client": false, 00:19:55.564 "zerocopy_threshold": 0, 00:19:55.564 "tls_version": 0, 00:19:55.564 "enable_ktls": false 00:19:55.564 } 00:19:55.564 }, 00:19:55.564 { 00:19:55.564 "method": "sock_impl_set_options", 00:19:55.564 "params": { 00:19:55.564 "impl_name": "posix", 00:19:55.564 "recv_buf_size": 2097152, 00:19:55.564 "send_buf_size": 2097152, 00:19:55.564 "enable_recv_pipe": true, 00:19:55.564 "enable_quickack": false, 00:19:55.564 "enable_placement_id": 0, 00:19:55.564 "enable_zerocopy_send_server": true, 00:19:55.564 "enable_zerocopy_send_client": false, 00:19:55.564 "zerocopy_threshold": 0, 00:19:55.564 "tls_version": 0, 00:19:55.564 "enable_ktls": false 00:19:55.564 } 00:19:55.564 } 00:19:55.564 ] 00:19:55.564 }, 00:19:55.564 { 00:19:55.564 "subsystem": "vmd", 00:19:55.564 "config": [] 00:19:55.564 }, 00:19:55.564 { 00:19:55.564 "subsystem": "accel", 00:19:55.564 "config": [ 00:19:55.564 { 00:19:55.564 "method": "accel_set_options", 00:19:55.564 "params": { 00:19:55.564 "small_cache_size": 128, 00:19:55.564 "large_cache_size": 16, 00:19:55.564 "task_count": 2048, 00:19:55.564 "sequence_count": 2048, 00:19:55.564 "buf_count": 2048 00:19:55.565 } 00:19:55.565 } 00:19:55.565 ] 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "subsystem": "bdev", 00:19:55.565 "config": [ 00:19:55.565 { 00:19:55.565 "method": "bdev_set_options", 00:19:55.565 "params": { 00:19:55.565 "bdev_io_pool_size": 65535, 00:19:55.565 "bdev_io_cache_size": 256, 00:19:55.565 "bdev_auto_examine": true, 00:19:55.565 "iobuf_small_cache_size": 128, 00:19:55.565 "iobuf_large_cache_size": 16 00:19:55.565 } 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "method": "bdev_raid_set_options", 00:19:55.565 "params": { 00:19:55.565 "process_window_size_kb": 1024, 00:19:55.565 "process_max_bandwidth_mb_sec": 0 00:19:55.565 } 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "method": "bdev_iscsi_set_options", 00:19:55.565 "params": { 00:19:55.565 "timeout_sec": 30 00:19:55.565 } 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "method": "bdev_nvme_set_options", 00:19:55.565 "params": { 00:19:55.565 "action_on_timeout": "none", 00:19:55.565 "timeout_us": 0, 00:19:55.565 "timeout_admin_us": 0, 00:19:55.565 "keep_alive_timeout_ms": 10000, 00:19:55.565 "arbitration_burst": 0, 00:19:55.565 "low_priority_weight": 0, 00:19:55.565 "medium_priority_weight": 0, 00:19:55.565 "high_priority_weight": 0, 00:19:55.565 "nvme_adminq_poll_period_us": 10000, 00:19:55.565 "nvme_ioq_poll_period_us": 0, 00:19:55.565 "io_queue_requests": 0, 00:19:55.565 "delay_cmd_submit": true, 00:19:55.565 "transport_retry_count": 4, 00:19:55.565 "bdev_retry_count": 3, 00:19:55.565 "transport_ack_timeout": 0, 00:19:55.565 "ctrlr_loss_timeout_sec": 0, 00:19:55.565 "reconnect_delay_sec": 0, 00:19:55.565 "fast_io_fail_timeout_sec": 0, 00:19:55.565 "disable_auto_failback": false, 00:19:55.565 "generate_uuids": false, 00:19:55.565 "transport_tos": 0, 00:19:55.565 "nvme_error_stat": false, 00:19:55.565 "rdma_srq_size": 0, 00:19:55.565 "io_path_stat": false, 00:19:55.565 "allow_accel_sequence": false, 00:19:55.565 "rdma_max_cq_size": 0, 00:19:55.565 
"rdma_cm_event_timeout_ms": 0, 00:19:55.565 "dhchap_digests": [ 00:19:55.565 "sha256", 00:19:55.565 "sha384", 00:19:55.565 "sha512" 00:19:55.565 ], 00:19:55.565 "dhchap_dhgroups": [ 00:19:55.565 "null", 00:19:55.565 "ffdhe2048", 00:19:55.565 "ffdhe3072", 00:19:55.565 "ffdhe4096", 00:19:55.565 "ffdhe6144", 00:19:55.565 "ffdhe8192" 00:19:55.565 ] 00:19:55.565 } 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "method": "bdev_nvme_set_hotplug", 00:19:55.565 "params": { 00:19:55.565 "period_us": 100000, 00:19:55.565 "enable": false 00:19:55.565 } 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "method": "bdev_malloc_create", 00:19:55.565 "params": { 00:19:55.565 "name": "malloc0", 00:19:55.565 "num_blocks": 8192, 00:19:55.565 "block_size": 4096, 00:19:55.565 "physical_block_size": 4096, 00:19:55.565 "uuid": "45f4d8f4-4ba9-4e07-a63f-160906000249", 00:19:55.565 "optimal_io_boundary": 0, 00:19:55.565 "md_size": 0, 00:19:55.565 "dif_type": 0, 00:19:55.565 "dif_is_head_of_md": false, 00:19:55.565 "dif_pi_format": 0 00:19:55.565 } 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "method": "bdev_wait_for_examine" 00:19:55.565 } 00:19:55.565 ] 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "subsystem": "nbd", 00:19:55.565 "config": [] 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "subsystem": "scheduler", 00:19:55.565 "config": [ 00:19:55.565 { 00:19:55.565 "method": "framework_set_scheduler", 00:19:55.565 "params": { 00:19:55.565 "name": "static" 00:19:55.565 } 00:19:55.565 } 00:19:55.565 ] 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "subsystem": "nvmf", 00:19:55.565 "config": [ 00:19:55.565 { 00:19:55.565 "method": "nvmf_set_config", 00:19:55.565 "params": { 00:19:55.565 "discovery_filter": "match_any", 00:19:55.565 "admin_cmd_passthru": { 00:19:55.565 "identify_ctrlr": false 00:19:55.565 }, 00:19:55.565 "dhchap_digests": [ 00:19:55.565 "sha256", 00:19:55.565 "sha384", 00:19:55.565 "sha512" 00:19:55.565 ], 00:19:55.565 "dhchap_dhgroups": [ 00:19:55.565 "null", 00:19:55.565 "ffdhe2048", 00:19:55.565 "ffdhe3072", 00:19:55.565 "ffdhe4096", 00:19:55.565 "ffdhe6144", 00:19:55.565 "ffdhe8192" 00:19:55.565 ] 00:19:55.565 } 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "method": "nvmf_set_max_subsystems", 00:19:55.565 "params": { 00:19:55.565 "max_subsystems": 1024 00:19:55.565 } 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "method": "nvmf_set_crdt", 00:19:55.565 "params": { 00:19:55.565 "crdt1": 0, 00:19:55.565 "crdt2": 0, 00:19:55.565 "crdt3": 0 00:19:55.565 } 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "method": "nvmf_create_transport", 00:19:55.565 "params": { 00:19:55.565 "trtype": "TCP", 00:19:55.565 "max_queue_depth": 128, 00:19:55.565 "max_io_qpairs_per_ctrlr": 127, 00:19:55.565 "in_capsule_data_size": 4096, 00:19:55.565 "max_io_size": 131072, 00:19:55.565 "io_unit_size": 131072, 00:19:55.565 "max_aq_depth": 128, 00:19:55.565 "num_shared_buffers": 511, 00:19:55.565 "buf_cache_size": 4294967295, 00:19:55.565 "dif_insert_or_strip": false, 00:19:55.565 "zcopy": false, 00:19:55.565 "c2h_success": false, 00:19:55.565 "sock_priority": 0, 00:19:55.565 "abort_timeout_sec": 1, 00:19:55.565 "ack_timeout": 0, 00:19:55.565 "data_wr_pool_size": 0 00:19:55.565 } 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "method": "nvmf_create_subsystem", 00:19:55.565 "params": { 00:19:55.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.565 "allow_any_host": false, 00:19:55.565 "serial_number": "SPDK00000000000001", 00:19:55.565 "model_number": "SPDK bdev Controller", 00:19:55.565 "max_namespaces": 10, 00:19:55.565 "min_cntlid": 1, 00:19:55.565 
"max_cntlid": 65519, 00:19:55.565 "ana_reporting": false 00:19:55.565 } 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "method": "nvmf_subsystem_add_host", 00:19:55.565 "params": { 00:19:55.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.565 "host": "nqn.2016-06.io.spdk:host1", 00:19:55.565 "psk": "key0" 00:19:55.565 } 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "method": "nvmf_subsystem_add_ns", 00:19:55.565 "params": { 00:19:55.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.565 "namespace": { 00:19:55.565 "nsid": 1, 00:19:55.565 "bdev_name": "malloc0", 00:19:55.565 "nguid": "45F4D8F44BA94E07A63F160906000249", 00:19:55.565 "uuid": "45f4d8f4-4ba9-4e07-a63f-160906000249", 00:19:55.565 "no_auto_visible": false 00:19:55.565 } 00:19:55.565 } 00:19:55.565 }, 00:19:55.565 { 00:19:55.565 "method": "nvmf_subsystem_add_listener", 00:19:55.565 "params": { 00:19:55.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.565 "listen_address": { 00:19:55.565 "trtype": "TCP", 00:19:55.565 "adrfam": "IPv4", 00:19:55.565 "traddr": "10.0.0.2", 00:19:55.565 "trsvcid": "4420" 00:19:55.565 }, 00:19:55.565 "secure_channel": true 00:19:55.565 } 00:19:55.565 } 00:19:55.565 ] 00:19:55.565 } 00:19:55.565 ] 00:19:55.565 }' 00:19:55.565 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:55.824 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:55.824 "subsystems": [ 00:19:55.824 { 00:19:55.824 "subsystem": "keyring", 00:19:55.824 "config": [ 00:19:55.824 { 00:19:55.824 "method": "keyring_file_add_key", 00:19:55.824 "params": { 00:19:55.824 "name": "key0", 00:19:55.824 "path": "/tmp/tmp.gk0wdYDzMM" 00:19:55.824 } 00:19:55.824 } 00:19:55.824 ] 00:19:55.824 }, 00:19:55.824 { 00:19:55.824 "subsystem": "iobuf", 00:19:55.824 "config": [ 00:19:55.824 { 00:19:55.824 "method": "iobuf_set_options", 00:19:55.824 "params": { 00:19:55.824 "small_pool_count": 8192, 00:19:55.824 "large_pool_count": 1024, 00:19:55.824 "small_bufsize": 8192, 00:19:55.824 "large_bufsize": 135168, 00:19:55.824 "enable_numa": false 00:19:55.824 } 00:19:55.824 } 00:19:55.824 ] 00:19:55.824 }, 00:19:55.824 { 00:19:55.824 "subsystem": "sock", 00:19:55.824 "config": [ 00:19:55.824 { 00:19:55.824 "method": "sock_set_default_impl", 00:19:55.824 "params": { 00:19:55.824 "impl_name": "posix" 00:19:55.824 } 00:19:55.824 }, 00:19:55.824 { 00:19:55.824 "method": "sock_impl_set_options", 00:19:55.824 "params": { 00:19:55.824 "impl_name": "ssl", 00:19:55.824 "recv_buf_size": 4096, 00:19:55.824 "send_buf_size": 4096, 00:19:55.824 "enable_recv_pipe": true, 00:19:55.824 "enable_quickack": false, 00:19:55.824 "enable_placement_id": 0, 00:19:55.824 "enable_zerocopy_send_server": true, 00:19:55.824 "enable_zerocopy_send_client": false, 00:19:55.824 "zerocopy_threshold": 0, 00:19:55.824 "tls_version": 0, 00:19:55.824 "enable_ktls": false 00:19:55.824 } 00:19:55.824 }, 00:19:55.824 { 00:19:55.824 "method": "sock_impl_set_options", 00:19:55.824 "params": { 00:19:55.824 "impl_name": "posix", 00:19:55.824 "recv_buf_size": 2097152, 00:19:55.824 "send_buf_size": 2097152, 00:19:55.824 "enable_recv_pipe": true, 00:19:55.824 "enable_quickack": false, 00:19:55.824 "enable_placement_id": 0, 00:19:55.824 "enable_zerocopy_send_server": true, 00:19:55.824 "enable_zerocopy_send_client": false, 00:19:55.824 "zerocopy_threshold": 0, 00:19:55.824 "tls_version": 0, 00:19:55.824 "enable_ktls": false 00:19:55.824 } 00:19:55.824 
} 00:19:55.824 ] 00:19:55.824 }, 00:19:55.824 { 00:19:55.824 "subsystem": "vmd", 00:19:55.824 "config": [] 00:19:55.824 }, 00:19:55.824 { 00:19:55.824 "subsystem": "accel", 00:19:55.824 "config": [ 00:19:55.824 { 00:19:55.824 "method": "accel_set_options", 00:19:55.824 "params": { 00:19:55.824 "small_cache_size": 128, 00:19:55.824 "large_cache_size": 16, 00:19:55.824 "task_count": 2048, 00:19:55.824 "sequence_count": 2048, 00:19:55.824 "buf_count": 2048 00:19:55.824 } 00:19:55.824 } 00:19:55.824 ] 00:19:55.824 }, 00:19:55.824 { 00:19:55.824 "subsystem": "bdev", 00:19:55.824 "config": [ 00:19:55.824 { 00:19:55.824 "method": "bdev_set_options", 00:19:55.824 "params": { 00:19:55.824 "bdev_io_pool_size": 65535, 00:19:55.824 "bdev_io_cache_size": 256, 00:19:55.824 "bdev_auto_examine": true, 00:19:55.824 "iobuf_small_cache_size": 128, 00:19:55.824 "iobuf_large_cache_size": 16 00:19:55.824 } 00:19:55.824 }, 00:19:55.824 { 00:19:55.824 "method": "bdev_raid_set_options", 00:19:55.824 "params": { 00:19:55.824 "process_window_size_kb": 1024, 00:19:55.824 "process_max_bandwidth_mb_sec": 0 00:19:55.824 } 00:19:55.824 }, 00:19:55.824 { 00:19:55.824 "method": "bdev_iscsi_set_options", 00:19:55.824 "params": { 00:19:55.824 "timeout_sec": 30 00:19:55.824 } 00:19:55.824 }, 00:19:55.824 { 00:19:55.824 "method": "bdev_nvme_set_options", 00:19:55.824 "params": { 00:19:55.824 "action_on_timeout": "none", 00:19:55.824 "timeout_us": 0, 00:19:55.824 "timeout_admin_us": 0, 00:19:55.824 "keep_alive_timeout_ms": 10000, 00:19:55.824 "arbitration_burst": 0, 00:19:55.824 "low_priority_weight": 0, 00:19:55.824 "medium_priority_weight": 0, 00:19:55.824 "high_priority_weight": 0, 00:19:55.824 "nvme_adminq_poll_period_us": 10000, 00:19:55.824 "nvme_ioq_poll_period_us": 0, 00:19:55.824 "io_queue_requests": 512, 00:19:55.824 "delay_cmd_submit": true, 00:19:55.824 "transport_retry_count": 4, 00:19:55.824 "bdev_retry_count": 3, 00:19:55.824 "transport_ack_timeout": 0, 00:19:55.824 "ctrlr_loss_timeout_sec": 0, 00:19:55.824 "reconnect_delay_sec": 0, 00:19:55.824 "fast_io_fail_timeout_sec": 0, 00:19:55.824 "disable_auto_failback": false, 00:19:55.824 "generate_uuids": false, 00:19:55.824 "transport_tos": 0, 00:19:55.824 "nvme_error_stat": false, 00:19:55.824 "rdma_srq_size": 0, 00:19:55.824 "io_path_stat": false, 00:19:55.824 "allow_accel_sequence": false, 00:19:55.824 "rdma_max_cq_size": 0, 00:19:55.824 "rdma_cm_event_timeout_ms": 0, 00:19:55.824 "dhchap_digests": [ 00:19:55.824 "sha256", 00:19:55.824 "sha384", 00:19:55.824 "sha512" 00:19:55.824 ], 00:19:55.824 "dhchap_dhgroups": [ 00:19:55.824 "null", 00:19:55.824 "ffdhe2048", 00:19:55.824 "ffdhe3072", 00:19:55.824 "ffdhe4096", 00:19:55.825 "ffdhe6144", 00:19:55.825 "ffdhe8192" 00:19:55.825 ] 00:19:55.825 } 00:19:55.825 }, 00:19:55.825 { 00:19:55.825 "method": "bdev_nvme_attach_controller", 00:19:55.825 "params": { 00:19:55.825 "name": "TLSTEST", 00:19:55.825 "trtype": "TCP", 00:19:55.825 "adrfam": "IPv4", 00:19:55.825 "traddr": "10.0.0.2", 00:19:55.825 "trsvcid": "4420", 00:19:55.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.825 "prchk_reftag": false, 00:19:55.825 "prchk_guard": false, 00:19:55.825 "ctrlr_loss_timeout_sec": 0, 00:19:55.825 "reconnect_delay_sec": 0, 00:19:55.825 "fast_io_fail_timeout_sec": 0, 00:19:55.825 "psk": "key0", 00:19:55.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.825 "hdgst": false, 00:19:55.825 "ddgst": false, 00:19:55.825 "multipath": "multipath" 00:19:55.825 } 00:19:55.825 }, 00:19:55.825 { 00:19:55.825 "method": 
"bdev_nvme_set_hotplug", 00:19:55.825 "params": { 00:19:55.825 "period_us": 100000, 00:19:55.825 "enable": false 00:19:55.825 } 00:19:55.825 }, 00:19:55.825 { 00:19:55.825 "method": "bdev_wait_for_examine" 00:19:55.825 } 00:19:55.825 ] 00:19:55.825 }, 00:19:55.825 { 00:19:55.825 "subsystem": "nbd", 00:19:55.825 "config": [] 00:19:55.825 } 00:19:55.825 ] 00:19:55.825 }' 00:19:55.825 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 399739 00:19:55.825 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 399739 ']' 00:19:55.825 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 399739 00:19:55.825 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:55.825 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:55.825 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 399739 00:19:55.825 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:55.825 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:55.825 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 399739' 00:19:55.825 killing process with pid 399739 00:19:55.825 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 399739 00:19:55.825 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.825 00:19:55.825 Latency(us) 00:19:55.825 [2024-11-15T09:38:44.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.825 [2024-11-15T09:38:44.288Z] =================================================================================================================== 00:19:55.825 [2024-11-15T09:38:44.288Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:55.825 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 399739 00:19:56.083 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 399455 00:19:56.083 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 399455 ']' 00:19:56.083 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 399455 00:19:56.083 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:56.083 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:56.083 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 399455 00:19:56.083 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:56.083 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:56.083 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 399455' 00:19:56.083 killing process with pid 399455 00:19:56.083 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 399455 00:19:56.083 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 399455 00:19:56.358 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:56.358 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:56.358 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:56.358 "subsystems": [ 00:19:56.358 { 00:19:56.358 "subsystem": "keyring", 00:19:56.358 "config": [ 00:19:56.358 { 00:19:56.358 "method": "keyring_file_add_key", 00:19:56.358 "params": { 00:19:56.358 "name": "key0", 00:19:56.358 "path": "/tmp/tmp.gk0wdYDzMM" 00:19:56.358 } 00:19:56.358 } 00:19:56.358 ] 00:19:56.358 }, 00:19:56.358 { 00:19:56.358 "subsystem": "iobuf", 00:19:56.358 "config": [ 00:19:56.358 { 00:19:56.358 "method": "iobuf_set_options", 00:19:56.358 "params": { 00:19:56.358 "small_pool_count": 8192, 00:19:56.358 "large_pool_count": 1024, 00:19:56.358 "small_bufsize": 8192, 00:19:56.358 "large_bufsize": 135168, 00:19:56.358 "enable_numa": false 00:19:56.358 } 00:19:56.358 } 00:19:56.358 ] 00:19:56.358 }, 00:19:56.358 { 00:19:56.358 "subsystem": "sock", 00:19:56.358 "config": [ 00:19:56.358 { 00:19:56.358 "method": "sock_set_default_impl", 00:19:56.358 "params": { 00:19:56.358 "impl_name": "posix" 00:19:56.358 } 00:19:56.358 }, 00:19:56.358 { 00:19:56.358 "method": "sock_impl_set_options", 00:19:56.358 "params": { 00:19:56.358 "impl_name": "ssl", 00:19:56.358 "recv_buf_size": 4096, 00:19:56.358 "send_buf_size": 4096, 00:19:56.358 "enable_recv_pipe": true, 00:19:56.359 "enable_quickack": false, 00:19:56.359 "enable_placement_id": 0, 00:19:56.359 "enable_zerocopy_send_server": true, 00:19:56.359 "enable_zerocopy_send_client": false, 00:19:56.359 "zerocopy_threshold": 0, 00:19:56.359 "tls_version": 0, 00:19:56.359 "enable_ktls": false 00:19:56.359 } 00:19:56.359 }, 00:19:56.359 { 00:19:56.359 "method": "sock_impl_set_options", 00:19:56.359 "params": { 00:19:56.359 "impl_name": "posix", 00:19:56.359 "recv_buf_size": 2097152, 00:19:56.359 "send_buf_size": 2097152, 00:19:56.359 "enable_recv_pipe": true, 00:19:56.359 "enable_quickack": false, 00:19:56.359 "enable_placement_id": 0, 00:19:56.359 "enable_zerocopy_send_server": true, 00:19:56.359 "enable_zerocopy_send_client": false, 00:19:56.359 "zerocopy_threshold": 0, 00:19:56.359 "tls_version": 0, 00:19:56.359 "enable_ktls": false 00:19:56.359 } 00:19:56.359 } 00:19:56.359 ] 00:19:56.359 }, 00:19:56.359 { 00:19:56.359 "subsystem": "vmd", 00:19:56.359 "config": [] 00:19:56.359 }, 00:19:56.359 { 00:19:56.359 "subsystem": "accel", 00:19:56.359 "config": [ 00:19:56.359 { 00:19:56.359 "method": "accel_set_options", 00:19:56.359 "params": { 00:19:56.359 "small_cache_size": 128, 00:19:56.359 "large_cache_size": 16, 00:19:56.359 "task_count": 2048, 00:19:56.359 "sequence_count": 2048, 00:19:56.359 "buf_count": 2048 00:19:56.359 } 00:19:56.359 } 00:19:56.359 ] 00:19:56.359 }, 00:19:56.359 { 00:19:56.359 "subsystem": "bdev", 00:19:56.359 "config": [ 00:19:56.359 { 00:19:56.359 "method": "bdev_set_options", 00:19:56.359 "params": { 00:19:56.359 "bdev_io_pool_size": 65535, 00:19:56.359 "bdev_io_cache_size": 256, 00:19:56.359 "bdev_auto_examine": true, 00:19:56.359 "iobuf_small_cache_size": 128, 00:19:56.360 "iobuf_large_cache_size": 16 00:19:56.360 } 00:19:56.360 }, 00:19:56.360 { 00:19:56.360 "method": "bdev_raid_set_options", 00:19:56.360 "params": { 00:19:56.360 "process_window_size_kb": 1024, 00:19:56.360 "process_max_bandwidth_mb_sec": 0 00:19:56.360 } 00:19:56.360 }, 00:19:56.360 { 00:19:56.360 "method": "bdev_iscsi_set_options", 00:19:56.360 "params": { 00:19:56.360 
"timeout_sec": 30 00:19:56.360 } 00:19:56.360 }, 00:19:56.360 { 00:19:56.360 "method": "bdev_nvme_set_options", 00:19:56.360 "params": { 00:19:56.360 "action_on_timeout": "none", 00:19:56.360 "timeout_us": 0, 00:19:56.360 "timeout_admin_us": 0, 00:19:56.360 "keep_alive_timeout_ms": 10000, 00:19:56.360 "arbitration_burst": 0, 00:19:56.360 "low_priority_weight": 0, 00:19:56.360 "medium_priority_weight": 0, 00:19:56.360 "high_priority_weight": 0, 00:19:56.360 "nvme_adminq_poll_period_us": 10000, 00:19:56.360 "nvme_ioq_poll_period_us": 0, 00:19:56.360 "io_queue_requests": 0, 00:19:56.360 "delay_cmd_submit": true, 00:19:56.360 "transport_retry_count": 4, 00:19:56.360 "bdev_retry_count": 3, 00:19:56.360 "transport_ack_timeout": 0, 00:19:56.360 "ctrlr_loss_timeout_sec": 0, 00:19:56.360 "reconnect_delay_sec": 0, 00:19:56.360 "fast_io_fail_timeout_sec": 0, 00:19:56.360 "disable_auto_failback": false, 00:19:56.360 "generate_uuids": false, 00:19:56.360 "transport_tos": 0, 00:19:56.360 "nvme_error_stat": false, 00:19:56.361 "rdma_srq_size": 0, 00:19:56.361 "io_path_stat": false, 00:19:56.361 "allow_accel_sequence": false, 00:19:56.361 "rdma_max_cq_size": 0, 00:19:56.361 "rdma_cm_event_timeout_ms": 0, 00:19:56.361 "dhchap_digests": [ 00:19:56.361 "sha256", 00:19:56.361 "sha384", 00:19:56.361 "sha512" 00:19:56.361 ], 00:19:56.361 "dhchap_dhgroups": [ 00:19:56.361 "null", 00:19:56.361 "ffdhe2048", 00:19:56.361 "ffdhe3072", 00:19:56.361 "ffdhe4096", 00:19:56.361 "ffdhe6144", 00:19:56.361 "ffdhe8192" 00:19:56.361 ] 00:19:56.361 } 00:19:56.361 }, 00:19:56.361 { 00:19:56.361 "method": "bdev_nvme_set_hotplug", 00:19:56.361 "params": { 00:19:56.361 "period_us": 100000, 00:19:56.361 "enable": false 00:19:56.361 } 00:19:56.361 }, 00:19:56.361 { 00:19:56.361 "method": "bdev_malloc_create", 00:19:56.361 "params": { 00:19:56.361 "name": "malloc0", 00:19:56.361 "num_blocks": 8192, 00:19:56.361 "block_size": 4096, 00:19:56.361 "physical_block_size": 4096, 00:19:56.361 "uuid": "45f4d8f4-4ba9-4e07-a63f-160906000249", 00:19:56.361 "optimal_io_boundary": 0, 00:19:56.361 "md_size": 0, 00:19:56.361 "dif_type": 0, 00:19:56.361 "dif_is_head_of_md": false, 00:19:56.361 "dif_pi_format": 0 00:19:56.361 } 00:19:56.361 }, 00:19:56.361 { 00:19:56.361 "method": "bdev_wait_for_examine" 00:19:56.361 } 00:19:56.361 ] 00:19:56.361 }, 00:19:56.361 { 00:19:56.361 "subsystem": "nbd", 00:19:56.361 "config": [] 00:19:56.361 }, 00:19:56.361 { 00:19:56.361 "subsystem": "scheduler", 00:19:56.361 "config": [ 00:19:56.361 { 00:19:56.361 "method": "framework_set_scheduler", 00:19:56.361 "params": { 00:19:56.361 "name": "static" 00:19:56.361 } 00:19:56.361 } 00:19:56.361 ] 00:19:56.361 }, 00:19:56.361 { 00:19:56.361 "subsystem": "nvmf", 00:19:56.361 "config": [ 00:19:56.361 { 00:19:56.361 "method": "nvmf_set_config", 00:19:56.361 "params": { 00:19:56.361 "discovery_filter": "match_any", 00:19:56.362 "admin_cmd_passthru": { 00:19:56.362 "identify_ctrlr": false 00:19:56.362 }, 00:19:56.362 "dhchap_digests": [ 00:19:56.362 "sha256", 00:19:56.362 "sha384", 00:19:56.362 "sha512" 00:19:56.362 ], 00:19:56.362 "dhchap_dhgroups": [ 00:19:56.362 "null", 00:19:56.362 "ffdhe2048", 00:19:56.362 "ffdhe3072", 00:19:56.362 "ffdhe4096", 00:19:56.362 "ffdhe6144", 00:19:56.362 "ffdhe8192" 00:19:56.362 ] 00:19:56.362 } 00:19:56.362 }, 00:19:56.362 { 00:19:56.362 "method": "nvmf_set_max_subsystems", 00:19:56.362 "params": { 00:19:56.362 "max_subsystems": 1024 00:19:56.362 } 00:19:56.362 }, 00:19:56.362 { 00:19:56.362 "method": "nvmf_set_crdt", 00:19:56.362 "params": { 
00:19:56.362 "crdt1": 0, 00:19:56.362 "crdt2": 0, 00:19:56.362 "crdt3": 0 00:19:56.362 } 00:19:56.362 }, 00:19:56.362 { 00:19:56.362 "method": "nvmf_create_transport", 00:19:56.362 "params": { 00:19:56.362 "trtype": "TCP", 00:19:56.362 "max_queue_depth": 128, 00:19:56.362 "max_io_qpairs_per_ctrlr": 127, 00:19:56.362 "in_capsule_data_size": 4096, 00:19:56.362 "max_io_size": 131072, 00:19:56.362 "io_unit_size": 131072, 00:19:56.362 "max_aq_depth": 128, 00:19:56.362 "num_shared_buffers": 511, 00:19:56.362 "buf_cache_size": 4294967295, 00:19:56.362 "dif_insert_or_strip": false, 00:19:56.362 "zcopy": false, 00:19:56.362 "c2h_success": false, 00:19:56.362 "sock_priority": 0, 00:19:56.362 "abort_timeout_sec": 1, 00:19:56.362 "ack_timeout": 0, 00:19:56.362 "data_wr_pool_size": 0 00:19:56.362 } 00:19:56.362 }, 00:19:56.362 { 00:19:56.363 "method": "nvmf_create_subsystem", 00:19:56.363 "params": { 00:19:56.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.363 "allow_any_host": false, 00:19:56.363 "serial_number": "SPDK00000000000001", 00:19:56.363 "model_number": "SPDK bdev Controller", 00:19:56.363 "max_namespaces": 10, 00:19:56.363 "min_cntlid": 1, 00:19:56.363 "max_cntlid": 65519, 00:19:56.363 "ana_reporting": false 00:19:56.363 } 00:19:56.363 }, 00:19:56.363 { 00:19:56.363 "method": "nvmf_subsystem_add_host", 00:19:56.363 "params": { 00:19:56.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.363 "host": "nqn.2016-06.io.spdk:host1", 00:19:56.363 "psk": "key0" 00:19:56.363 } 00:19:56.363 }, 00:19:56.363 { 00:19:56.363 "method": "nvmf_subsystem_add_ns", 00:19:56.363 "params": { 00:19:56.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.363 "namespace": { 00:19:56.363 "nsid": 1, 00:19:56.363 "bdev_name": "malloc0", 00:19:56.363 "nguid": "45F4D8F44BA94E07A63F160906000249", 00:19:56.363 "uuid": "45f4d8f4-4ba9-4e07-a63f-160906000249", 00:19:56.363 "no_auto_visible": false 00:19:56.363 } 00:19:56.363 } 00:19:56.363 }, 00:19:56.363 { 00:19:56.363 "method": "nvmf_subsystem_add_listener", 00:19:56.363 "params": { 00:19:56.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.363 "listen_address": { 00:19:56.363 "trtype": "TCP", 00:19:56.363 "adrfam": "IPv4", 00:19:56.363 "traddr": "10.0.0.2", 00:19:56.363 "trsvcid": "4420" 00:19:56.363 }, 00:19:56.363 "secure_channel": true 00:19:56.363 } 00:19:56.363 } 00:19:56.363 ] 00:19:56.363 } 00:19:56.363 ] 00:19:56.363 }' 00:19:56.363 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:56.363 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.363 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=400016 00:19:56.363 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:56.363 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 400016 00:19:56.363 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 400016 ']' 00:19:56.363 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.363 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:56.363 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:56.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.363 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:56.363 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.363 [2024-11-15 10:38:44.663053] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:19:56.363 [2024-11-15 10:38:44.663137] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.363 [2024-11-15 10:38:44.734183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.363 [2024-11-15 10:38:44.790976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.363 [2024-11-15 10:38:44.791021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.363 [2024-11-15 10:38:44.791049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.363 [2024-11-15 10:38:44.791061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.363 [2024-11-15 10:38:44.791076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.363 [2024-11-15 10:38:44.791752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.622 [2024-11-15 10:38:45.022836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.622 [2024-11-15 10:38:45.054865] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.622 [2024-11-15 10:38:45.055124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=400167 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 400167 /var/tmp/bdevperf.sock 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 400167 ']' 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:57.556 10:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.556 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:57.556 "subsystems": [ 00:19:57.556 { 00:19:57.556 "subsystem": "keyring", 00:19:57.556 "config": [ 00:19:57.556 { 00:19:57.556 "method": "keyring_file_add_key", 00:19:57.556 "params": { 00:19:57.556 "name": "key0", 00:19:57.556 "path": "/tmp/tmp.gk0wdYDzMM" 00:19:57.556 } 00:19:57.556 } 00:19:57.556 ] 00:19:57.556 }, 00:19:57.556 { 00:19:57.556 "subsystem": "iobuf", 00:19:57.556 "config": [ 00:19:57.556 { 00:19:57.556 "method": "iobuf_set_options", 00:19:57.556 "params": { 00:19:57.556 "small_pool_count": 8192, 00:19:57.556 "large_pool_count": 1024, 00:19:57.556 "small_bufsize": 8192, 00:19:57.556 "large_bufsize": 135168, 00:19:57.556 "enable_numa": false 00:19:57.556 } 00:19:57.556 } 00:19:57.556 ] 00:19:57.556 }, 00:19:57.556 { 00:19:57.556 "subsystem": "sock", 00:19:57.556 "config": [ 00:19:57.556 { 00:19:57.556 "method": "sock_set_default_impl", 00:19:57.556 "params": { 00:19:57.556 "impl_name": "posix" 00:19:57.556 } 00:19:57.556 }, 00:19:57.556 { 00:19:57.556 "method": "sock_impl_set_options", 00:19:57.556 "params": { 00:19:57.556 "impl_name": "ssl", 00:19:57.556 "recv_buf_size": 4096, 00:19:57.556 "send_buf_size": 4096, 00:19:57.556 "enable_recv_pipe": true, 00:19:57.556 "enable_quickack": false, 00:19:57.556 "enable_placement_id": 0, 00:19:57.556 "enable_zerocopy_send_server": true, 00:19:57.556 "enable_zerocopy_send_client": false, 00:19:57.556 "zerocopy_threshold": 0, 00:19:57.556 "tls_version": 0, 00:19:57.556 "enable_ktls": false 00:19:57.556 } 00:19:57.556 }, 00:19:57.556 { 00:19:57.556 "method": "sock_impl_set_options", 00:19:57.556 "params": { 00:19:57.556 "impl_name": "posix", 00:19:57.556 "recv_buf_size": 2097152, 00:19:57.556 "send_buf_size": 2097152, 00:19:57.556 "enable_recv_pipe": true, 00:19:57.556 "enable_quickack": false, 00:19:57.556 "enable_placement_id": 0, 00:19:57.556 "enable_zerocopy_send_server": true, 00:19:57.556 "enable_zerocopy_send_client": false, 00:19:57.556 "zerocopy_threshold": 0, 00:19:57.556 "tls_version": 0, 00:19:57.556 "enable_ktls": false 00:19:57.556 } 00:19:57.556 } 00:19:57.556 ] 00:19:57.556 }, 00:19:57.556 { 00:19:57.556 "subsystem": "vmd", 00:19:57.556 "config": [] 00:19:57.556 }, 00:19:57.556 { 00:19:57.556 "subsystem": "accel", 00:19:57.556 "config": [ 00:19:57.556 { 00:19:57.556 "method": "accel_set_options", 00:19:57.556 "params": { 00:19:57.556 "small_cache_size": 128, 00:19:57.556 "large_cache_size": 16, 00:19:57.556 "task_count": 2048, 00:19:57.556 "sequence_count": 2048, 00:19:57.556 "buf_count": 2048 00:19:57.556 } 00:19:57.556 } 00:19:57.556 ] 00:19:57.556 }, 00:19:57.556 { 00:19:57.556 "subsystem": "bdev", 00:19:57.556 "config": [ 00:19:57.556 { 00:19:57.556 "method": "bdev_set_options", 00:19:57.556 "params": { 00:19:57.556 "bdev_io_pool_size": 65535, 00:19:57.556 "bdev_io_cache_size": 256, 00:19:57.556 "bdev_auto_examine": true, 00:19:57.556 "iobuf_small_cache_size": 128, 00:19:57.556 "iobuf_large_cache_size": 16 00:19:57.556 } 00:19:57.556 }, 
00:19:57.556 { 00:19:57.556 "method": "bdev_raid_set_options", 00:19:57.556 "params": { 00:19:57.556 "process_window_size_kb": 1024, 00:19:57.556 "process_max_bandwidth_mb_sec": 0 00:19:57.556 } 00:19:57.556 }, 00:19:57.556 { 00:19:57.556 "method": "bdev_iscsi_set_options", 00:19:57.556 "params": { 00:19:57.556 "timeout_sec": 30 00:19:57.556 } 00:19:57.556 }, 00:19:57.556 { 00:19:57.556 "method": "bdev_nvme_set_options", 00:19:57.556 "params": { 00:19:57.556 "action_on_timeout": "none", 00:19:57.556 "timeout_us": 0, 00:19:57.556 "timeout_admin_us": 0, 00:19:57.556 "keep_alive_timeout_ms": 10000, 00:19:57.556 "arbitration_burst": 0, 00:19:57.556 "low_priority_weight": 0, 00:19:57.556 "medium_priority_weight": 0, 00:19:57.557 "high_priority_weight": 0, 00:19:57.557 "nvme_adminq_poll_period_us": 10000, 00:19:57.557 "nvme_ioq_poll_period_us": 0, 00:19:57.557 "io_queue_requests": 512, 00:19:57.557 "delay_cmd_submit": true, 00:19:57.557 "transport_retry_count": 4, 00:19:57.557 "bdev_retry_count": 3, 00:19:57.557 "transport_ack_timeout": 0, 00:19:57.557 "ctrlr_loss_timeout_sec": 0, 00:19:57.557 "reconnect_delay_sec": 0, 00:19:57.557 "fast_io_fail_timeout_sec": 0, 00:19:57.557 "disable_auto_failback": false, 00:19:57.557 "generate_uuids": false, 00:19:57.557 "transport_tos": 0, 00:19:57.557 "nvme_error_stat": false, 00:19:57.557 "rdma_srq_size": 0, 00:19:57.557 "io_path_stat": false, 00:19:57.557 "allow_accel_sequence": false, 00:19:57.557 "rdma_max_cq_size": 0, 00:19:57.557 "rdma_cm_event_timeout_ms": 0, 00:19:57.557 "dhchap_digests": [ 00:19:57.557 "sha256", 00:19:57.557 "sha384", 00:19:57.557 "sha512" 00:19:57.557 ], 00:19:57.557 "dhchap_dhgroups": [ 00:19:57.557 "null", 00:19:57.557 "ffdhe2048", 00:19:57.557 "ffdhe3072", 00:19:57.557 "ffdhe4096", 00:19:57.557 "ffdhe6144", 00:19:57.557 "ffdhe8192" 00:19:57.557 ] 00:19:57.557 } 00:19:57.557 }, 00:19:57.557 { 00:19:57.557 "method": "bdev_nvme_attach_controller", 00:19:57.557 "params": { 00:19:57.557 "name": "TLSTEST", 00:19:57.557 "trtype": "TCP", 00:19:57.557 "adrfam": "IPv4", 00:19:57.557 "traddr": "10.0.0.2", 00:19:57.557 "trsvcid": "4420", 00:19:57.557 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.557 "prchk_reftag": false, 00:19:57.557 "prchk_guard": false, 00:19:57.557 "ctrlr_loss_timeout_sec": 0, 00:19:57.557 "reconnect_delay_sec": 0, 00:19:57.557 "fast_io_fail_timeout_sec": 0, 00:19:57.557 "psk": "key0", 00:19:57.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.557 "hdgst": false, 00:19:57.557 "ddgst": false, 00:19:57.557 "multipath": "multipath" 00:19:57.557 } 00:19:57.557 }, 00:19:57.557 { 00:19:57.557 "method": "bdev_nvme_set_hotplug", 00:19:57.557 "params": { 00:19:57.557 "period_us": 100000, 00:19:57.557 "enable": false 00:19:57.557 } 00:19:57.557 }, 00:19:57.557 { 00:19:57.557 "method": "bdev_wait_for_examine" 00:19:57.557 } 00:19:57.557 ] 00:19:57.557 }, 00:19:57.557 { 00:19:57.557 "subsystem": "nbd", 00:19:57.557 "config": [] 00:19:57.557 } 00:19:57.557 ] 00:19:57.557 }' 00:19:57.557 [2024-11-15 10:38:45.765509] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
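For the positive data-path test the initiator is configured from a canned JSON document instead of individual RPC calls: the '{ "subsystems": ... }' blob echoed by target/tls.sh@206 above is the configuration captured earlier with save_config from the RPC-configured bdevperf (target/tls.sh@199), replayed here so that key0 and the TLSTEST controller are set up before any I/O is issued. A rough sketch of that invocation, assuming the JSON is fed through bash process substitution (which is what makes it appear as -c /dev/fd/63 on the command line) and that -z keeps bdevperf idle until the perform_tests RPC arrives:

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
BDEVPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
# $bdevperfconf holds the JSON captured earlier via 'rpc.py -s /var/tmp/bdevperf.sock save_config'
"$BDEVPERF" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
    -c <(echo "$bdevperfconf") &                                # replayed config shows up as /dev/fd/63
"$BDEVPERF_PY" -t 20 -s /var/tmp/bdevperf.sock perform_tests    # drives the 10 s verify run recorded below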
00:19:57.557 [2024-11-15 10:38:45.765588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400167 ] 00:19:57.557 [2024-11-15 10:38:45.830933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.557 [2024-11-15 10:38:45.888979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.815 [2024-11-15 10:38:46.065578] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.815 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:57.815 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:57.815 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:58.073 Running I/O for 10 seconds... 00:19:59.942 3273.00 IOPS, 12.79 MiB/s [2024-11-15T09:38:49.338Z] 3222.00 IOPS, 12.59 MiB/s [2024-11-15T09:38:50.712Z] 3279.33 IOPS, 12.81 MiB/s [2024-11-15T09:38:51.647Z] 3260.75 IOPS, 12.74 MiB/s [2024-11-15T09:38:52.581Z] 3292.40 IOPS, 12.86 MiB/s [2024-11-15T09:38:53.516Z] 3321.50 IOPS, 12.97 MiB/s [2024-11-15T09:38:54.451Z] 3349.43 IOPS, 13.08 MiB/s [2024-11-15T09:38:55.384Z] 3348.12 IOPS, 13.08 MiB/s [2024-11-15T09:38:56.757Z] 3370.67 IOPS, 13.17 MiB/s [2024-11-15T09:38:56.757Z] 3362.50 IOPS, 13.13 MiB/s 00:20:08.294 Latency(us) 00:20:08.294 [2024-11-15T09:38:56.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.294 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:08.294 Verification LBA range: start 0x0 length 0x2000 00:20:08.294 TLSTESTn1 : 10.02 3368.55 13.16 0.00 0.00 37938.13 5971.06 76895.57 00:20:08.294 [2024-11-15T09:38:56.757Z] =================================================================================================================== 00:20:08.294 [2024-11-15T09:38:56.757Z] Total : 3368.55 13.16 0.00 0.00 37938.13 5971.06 76895.57 00:20:08.294 { 00:20:08.294 "results": [ 00:20:08.294 { 00:20:08.294 "job": "TLSTESTn1", 00:20:08.294 "core_mask": "0x4", 00:20:08.294 "workload": "verify", 00:20:08.294 "status": "finished", 00:20:08.294 "verify_range": { 00:20:08.294 "start": 0, 00:20:08.294 "length": 8192 00:20:08.294 }, 00:20:08.294 "queue_depth": 128, 00:20:08.294 "io_size": 4096, 00:20:08.294 "runtime": 10.01945, 00:20:08.294 "iops": 3368.5481738019553, 00:20:08.294 "mibps": 13.158391303913888, 00:20:08.294 "io_failed": 0, 00:20:08.294 "io_timeout": 0, 00:20:08.294 "avg_latency_us": 37938.127666384644, 00:20:08.294 "min_latency_us": 5971.057777777778, 00:20:08.294 "max_latency_us": 76895.57333333333 00:20:08.294 } 00:20:08.294 ], 00:20:08.294 "core_count": 1 00:20:08.294 } 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 400167 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 400167 ']' 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 400167 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 400167 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 400167' 00:20:08.294 killing process with pid 400167 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 400167 00:20:08.294 Received shutdown signal, test time was about 10.000000 seconds 00:20:08.294 00:20:08.294 Latency(us) 00:20:08.294 [2024-11-15T09:38:56.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.294 [2024-11-15T09:38:56.757Z] =================================================================================================================== 00:20:08.294 [2024-11-15T09:38:56.757Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 400167 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 400016 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 400016 ']' 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 400016 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 400016 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 400016' 00:20:08.294 killing process with pid 400016 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 400016 00:20:08.294 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 400016 00:20:08.552 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:08.552 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.552 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:08.552 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.552 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=401469 00:20:08.552 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:08.552 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 401469 00:20:08.552 10:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 401469 ']' 00:20:08.552 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.552 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:08.552 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.552 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:08.552 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.552 [2024-11-15 10:38:56.910743] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:20:08.552 [2024-11-15 10:38:56.910843] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.552 [2024-11-15 10:38:56.982990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.811 [2024-11-15 10:38:57.042828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.811 [2024-11-15 10:38:57.042888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.811 [2024-11-15 10:38:57.042917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.811 [2024-11-15 10:38:57.042929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.811 [2024-11-15 10:38:57.042938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
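
The nvmf_tgt that just came up (pid 401469) is then given its TLS-enabled subsystem by setup_nvmf_tgt in the lines that follow. Stripped of the full Jenkins paths, that target-side sequence is roughly the sketch below (default /var/tmp/spdk.sock RPC socket assumed; all commands mirror the rpc.py calls visible in this log):

scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as TLS-capable (logged as "TLS support is considered experimental")
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Register the PSK and bind it to the allowed host
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gk0wdYDzMM
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
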
00:20:08.811 [2024-11-15 10:38:57.043595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.811 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:08.811 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:08.811 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:08.811 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:08.811 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.811 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.811 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.gk0wdYDzMM 00:20:08.811 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gk0wdYDzMM 00:20:08.811 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:09.069 [2024-11-15 10:38:57.471179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.069 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:09.633 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:09.633 [2024-11-15 10:38:58.088867] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.633 [2024-11-15 10:38:58.089123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.891 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:10.148 malloc0 00:20:10.148 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:10.406 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gk0wdYDzMM 00:20:10.663 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:10.921 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=401781 00:20:10.921 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:10.921 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:10.921 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 401781 /var/tmp/bdevperf.sock 00:20:10.921 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 401781 ']' 00:20:10.921 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.921 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:10.921 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.921 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:10.921 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.921 [2024-11-15 10:38:59.321511] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:20:10.921 [2024-11-15 10:38:59.321591] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401781 ] 00:20:11.180 [2024-11-15 10:38:59.395414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.180 [2024-11-15 10:38:59.456280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.180 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:11.180 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:11.180 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gk0wdYDzMM 00:20:11.438 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:11.696 [2024-11-15 10:39:00.102704] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.954 nvme0n1 00:20:11.954 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:11.954 Running I/O for 1 seconds... 
00:20:12.927 3323.00 IOPS, 12.98 MiB/s 00:20:12.927 Latency(us) 00:20:12.927 [2024-11-15T09:39:01.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.927 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:12.927 Verification LBA range: start 0x0 length 0x2000 00:20:12.927 nvme0n1 : 1.02 3387.47 13.23 0.00 0.00 37442.11 6990.51 33399.09 00:20:12.927 [2024-11-15T09:39:01.390Z] =================================================================================================================== 00:20:12.927 [2024-11-15T09:39:01.390Z] Total : 3387.47 13.23 0.00 0.00 37442.11 6990.51 33399.09 00:20:12.927 { 00:20:12.927 "results": [ 00:20:12.927 { 00:20:12.927 "job": "nvme0n1", 00:20:12.927 "core_mask": "0x2", 00:20:12.927 "workload": "verify", 00:20:12.927 "status": "finished", 00:20:12.927 "verify_range": { 00:20:12.927 "start": 0, 00:20:12.927 "length": 8192 00:20:12.927 }, 00:20:12.927 "queue_depth": 128, 00:20:12.927 "io_size": 4096, 00:20:12.927 "runtime": 1.018755, 00:20:12.927 "iops": 3387.46803696669, 00:20:12.927 "mibps": 13.232297019401132, 00:20:12.927 "io_failed": 0, 00:20:12.927 "io_timeout": 0, 00:20:12.927 "avg_latency_us": 37442.109361752366, 00:20:12.927 "min_latency_us": 6990.506666666667, 00:20:12.927 "max_latency_us": 33399.08740740741 00:20:12.927 } 00:20:12.927 ], 00:20:12.927 "core_count": 1 00:20:12.927 } 00:20:12.927 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 401781 00:20:12.927 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 401781 ']' 00:20:12.927 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 401781 00:20:12.927 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:12.927 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:12.927 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 401781 00:20:12.927 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:12.928 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:12.928 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 401781' 00:20:12.928 killing process with pid 401781 00:20:12.928 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 401781 00:20:12.928 Received shutdown signal, test time was about 1.000000 seconds 00:20:12.928 00:20:12.928 Latency(us) 00:20:12.928 [2024-11-15T09:39:01.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.928 [2024-11-15T09:39:01.391Z] =================================================================================================================== 00:20:12.928 [2024-11-15T09:39:01.391Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.928 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 401781 00:20:13.203 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 401469 00:20:13.203 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 401469 ']' 00:20:13.203 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 401469 00:20:13.203 10:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:13.203 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:13.203 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 401469 00:20:13.203 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:13.203 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:13.203 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 401469' 00:20:13.203 killing process with pid 401469 00:20:13.203 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 401469 00:20:13.203 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 401469 00:20:13.483 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:13.483 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.483 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:13.483 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.483 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=402164 00:20:13.483 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:13.483 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 402164 00:20:13.483 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 402164 ']' 00:20:13.483 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.483 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:13.483 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.483 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:13.483 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.483 [2024-11-15 10:39:01.888903] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:20:13.483 [2024-11-15 10:39:01.889005] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.753 [2024-11-15 10:39:01.967585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.753 [2024-11-15 10:39:02.026349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.753 [2024-11-15 10:39:02.026430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:13.753 [2024-11-15 10:39:02.026460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.753 [2024-11-15 10:39:02.026472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.753 [2024-11-15 10:39:02.026483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.753 [2024-11-15 10:39:02.027132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.753 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:13.753 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:13.753 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:13.753 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:13.753 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.753 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.753 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:13.753 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.753 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.753 [2024-11-15 10:39:02.171040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.753 malloc0 00:20:13.753 [2024-11-15 10:39:02.202822] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.753 [2024-11-15 10:39:02.203087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.035 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.035 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=402210 00:20:14.035 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:14.035 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 402210 /var/tmp/bdevperf.sock 00:20:14.035 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 402210 ']' 00:20:14.035 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.035 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:14.035 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.035 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:14.035 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.035 [2024-11-15 10:39:02.275199] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
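
As with the earlier runs, bdevperf here (pid 402210) is launched idle: -z appears to make it wait until a perform_tests RPC arrives, and -r points its RPC server at /var/tmp/bdevperf.sock, so the key and TLS controller are injected over that socket before the workload starts. Reduced to a sketch (paths relative to the SPDK tree; flags copied from the command line above):

# Start bdevperf idle, exposing an RPC socket
build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
# ...configure key0 and the TLS controller over /var/tmp/bdevperf.sock (see the rpc.py calls below)...
# Kick off the configured workload and collect the result JSON
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
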
00:20:14.035 [2024-11-15 10:39:02.275264] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402210 ] 00:20:14.035 [2024-11-15 10:39:02.342635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.035 [2024-11-15 10:39:02.400810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.313 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:14.313 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:14.313 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gk0wdYDzMM 00:20:14.594 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:14.594 [2024-11-15 10:39:03.038492] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.876 nvme0n1 00:20:14.876 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:14.876 Running I/O for 1 seconds... 00:20:15.811 2978.00 IOPS, 11.63 MiB/s 00:20:15.811 Latency(us) 00:20:15.811 [2024-11-15T09:39:04.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.811 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.811 Verification LBA range: start 0x0 length 0x2000 00:20:15.811 nvme0n1 : 1.03 3008.62 11.75 0.00 0.00 41953.30 5801.15 86216.25 00:20:15.811 [2024-11-15T09:39:04.274Z] =================================================================================================================== 00:20:15.811 [2024-11-15T09:39:04.274Z] Total : 3008.62 11.75 0.00 0.00 41953.30 5801.15 86216.25 00:20:15.811 { 00:20:15.811 "results": [ 00:20:15.811 { 00:20:15.811 "job": "nvme0n1", 00:20:15.811 "core_mask": "0x2", 00:20:15.811 "workload": "verify", 00:20:15.811 "status": "finished", 00:20:15.811 "verify_range": { 00:20:15.811 "start": 0, 00:20:15.811 "length": 8192 00:20:15.811 }, 00:20:15.811 "queue_depth": 128, 00:20:15.811 "io_size": 4096, 00:20:15.811 "runtime": 1.032699, 00:20:15.811 "iops": 3008.6210986938113, 00:20:15.811 "mibps": 11.7524261667727, 00:20:15.811 "io_failed": 0, 00:20:15.811 "io_timeout": 0, 00:20:15.811 "avg_latency_us": 41953.29745544708, 00:20:15.811 "min_latency_us": 5801.14962962963, 00:20:15.811 "max_latency_us": 86216.24888888889 00:20:15.811 } 00:20:15.811 ], 00:20:15.811 "core_count": 1 00:20:15.811 } 00:20:16.069 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:16.069 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.069 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.069 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.069 10:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:16.069 "subsystems": [ 00:20:16.069 { 00:20:16.069 "subsystem": "keyring", 00:20:16.069 "config": [ 00:20:16.069 { 00:20:16.069 "method": "keyring_file_add_key", 00:20:16.069 "params": { 00:20:16.069 "name": "key0", 00:20:16.069 "path": "/tmp/tmp.gk0wdYDzMM" 00:20:16.069 } 00:20:16.069 } 00:20:16.069 ] 00:20:16.069 }, 00:20:16.069 { 00:20:16.069 "subsystem": "iobuf", 00:20:16.069 "config": [ 00:20:16.069 { 00:20:16.069 "method": "iobuf_set_options", 00:20:16.069 "params": { 00:20:16.069 "small_pool_count": 8192, 00:20:16.069 "large_pool_count": 1024, 00:20:16.069 "small_bufsize": 8192, 00:20:16.069 "large_bufsize": 135168, 00:20:16.069 "enable_numa": false 00:20:16.069 } 00:20:16.069 } 00:20:16.069 ] 00:20:16.069 }, 00:20:16.069 { 00:20:16.069 "subsystem": "sock", 00:20:16.069 "config": [ 00:20:16.069 { 00:20:16.069 "method": "sock_set_default_impl", 00:20:16.069 "params": { 00:20:16.069 "impl_name": "posix" 00:20:16.069 } 00:20:16.069 }, 00:20:16.069 { 00:20:16.069 "method": "sock_impl_set_options", 00:20:16.069 "params": { 00:20:16.069 "impl_name": "ssl", 00:20:16.069 "recv_buf_size": 4096, 00:20:16.069 "send_buf_size": 4096, 00:20:16.069 "enable_recv_pipe": true, 00:20:16.069 "enable_quickack": false, 00:20:16.069 "enable_placement_id": 0, 00:20:16.069 "enable_zerocopy_send_server": true, 00:20:16.069 "enable_zerocopy_send_client": false, 00:20:16.069 "zerocopy_threshold": 0, 00:20:16.069 "tls_version": 0, 00:20:16.069 "enable_ktls": false 00:20:16.069 } 00:20:16.069 }, 00:20:16.069 { 00:20:16.069 "method": "sock_impl_set_options", 00:20:16.069 "params": { 00:20:16.069 "impl_name": "posix", 00:20:16.069 "recv_buf_size": 2097152, 00:20:16.069 "send_buf_size": 2097152, 00:20:16.069 "enable_recv_pipe": true, 00:20:16.069 "enable_quickack": false, 00:20:16.069 "enable_placement_id": 0, 00:20:16.069 "enable_zerocopy_send_server": true, 00:20:16.069 "enable_zerocopy_send_client": false, 00:20:16.069 "zerocopy_threshold": 0, 00:20:16.069 "tls_version": 0, 00:20:16.069 "enable_ktls": false 00:20:16.069 } 00:20:16.069 } 00:20:16.069 ] 00:20:16.069 }, 00:20:16.069 { 00:20:16.069 "subsystem": "vmd", 00:20:16.069 "config": [] 00:20:16.069 }, 00:20:16.069 { 00:20:16.069 "subsystem": "accel", 00:20:16.069 "config": [ 00:20:16.069 { 00:20:16.069 "method": "accel_set_options", 00:20:16.069 "params": { 00:20:16.069 "small_cache_size": 128, 00:20:16.069 "large_cache_size": 16, 00:20:16.069 "task_count": 2048, 00:20:16.069 "sequence_count": 2048, 00:20:16.069 "buf_count": 2048 00:20:16.069 } 00:20:16.069 } 00:20:16.069 ] 00:20:16.069 }, 00:20:16.069 { 00:20:16.069 "subsystem": "bdev", 00:20:16.069 "config": [ 00:20:16.069 { 00:20:16.069 "method": "bdev_set_options", 00:20:16.069 "params": { 00:20:16.069 "bdev_io_pool_size": 65535, 00:20:16.069 "bdev_io_cache_size": 256, 00:20:16.069 "bdev_auto_examine": true, 00:20:16.069 "iobuf_small_cache_size": 128, 00:20:16.069 "iobuf_large_cache_size": 16 00:20:16.069 } 00:20:16.069 }, 00:20:16.069 { 00:20:16.069 "method": "bdev_raid_set_options", 00:20:16.069 "params": { 00:20:16.069 "process_window_size_kb": 1024, 00:20:16.069 "process_max_bandwidth_mb_sec": 0 00:20:16.069 } 00:20:16.069 }, 00:20:16.069 { 00:20:16.069 "method": "bdev_iscsi_set_options", 00:20:16.069 "params": { 00:20:16.069 "timeout_sec": 30 00:20:16.069 } 00:20:16.069 }, 00:20:16.069 { 00:20:16.069 "method": "bdev_nvme_set_options", 00:20:16.069 "params": { 00:20:16.069 "action_on_timeout": "none", 00:20:16.069 
"timeout_us": 0, 00:20:16.069 "timeout_admin_us": 0, 00:20:16.069 "keep_alive_timeout_ms": 10000, 00:20:16.070 "arbitration_burst": 0, 00:20:16.070 "low_priority_weight": 0, 00:20:16.070 "medium_priority_weight": 0, 00:20:16.070 "high_priority_weight": 0, 00:20:16.070 "nvme_adminq_poll_period_us": 10000, 00:20:16.070 "nvme_ioq_poll_period_us": 0, 00:20:16.070 "io_queue_requests": 0, 00:20:16.070 "delay_cmd_submit": true, 00:20:16.070 "transport_retry_count": 4, 00:20:16.070 "bdev_retry_count": 3, 00:20:16.070 "transport_ack_timeout": 0, 00:20:16.070 "ctrlr_loss_timeout_sec": 0, 00:20:16.070 "reconnect_delay_sec": 0, 00:20:16.070 "fast_io_fail_timeout_sec": 0, 00:20:16.070 "disable_auto_failback": false, 00:20:16.070 "generate_uuids": false, 00:20:16.070 "transport_tos": 0, 00:20:16.070 "nvme_error_stat": false, 00:20:16.070 "rdma_srq_size": 0, 00:20:16.070 "io_path_stat": false, 00:20:16.070 "allow_accel_sequence": false, 00:20:16.070 "rdma_max_cq_size": 0, 00:20:16.070 "rdma_cm_event_timeout_ms": 0, 00:20:16.070 "dhchap_digests": [ 00:20:16.070 "sha256", 00:20:16.070 "sha384", 00:20:16.070 "sha512" 00:20:16.070 ], 00:20:16.070 "dhchap_dhgroups": [ 00:20:16.070 "null", 00:20:16.070 "ffdhe2048", 00:20:16.070 "ffdhe3072", 00:20:16.070 "ffdhe4096", 00:20:16.070 "ffdhe6144", 00:20:16.070 "ffdhe8192" 00:20:16.070 ] 00:20:16.070 } 00:20:16.070 }, 00:20:16.070 { 00:20:16.070 "method": "bdev_nvme_set_hotplug", 00:20:16.070 "params": { 00:20:16.070 "period_us": 100000, 00:20:16.070 "enable": false 00:20:16.070 } 00:20:16.070 }, 00:20:16.070 { 00:20:16.070 "method": "bdev_malloc_create", 00:20:16.070 "params": { 00:20:16.070 "name": "malloc0", 00:20:16.070 "num_blocks": 8192, 00:20:16.070 "block_size": 4096, 00:20:16.070 "physical_block_size": 4096, 00:20:16.070 "uuid": "69f584a9-4b0b-4dec-a535-ba23ac86a1bb", 00:20:16.070 "optimal_io_boundary": 0, 00:20:16.070 "md_size": 0, 00:20:16.070 "dif_type": 0, 00:20:16.070 "dif_is_head_of_md": false, 00:20:16.070 "dif_pi_format": 0 00:20:16.070 } 00:20:16.070 }, 00:20:16.070 { 00:20:16.070 "method": "bdev_wait_for_examine" 00:20:16.070 } 00:20:16.070 ] 00:20:16.070 }, 00:20:16.070 { 00:20:16.070 "subsystem": "nbd", 00:20:16.070 "config": [] 00:20:16.070 }, 00:20:16.070 { 00:20:16.070 "subsystem": "scheduler", 00:20:16.070 "config": [ 00:20:16.070 { 00:20:16.070 "method": "framework_set_scheduler", 00:20:16.070 "params": { 00:20:16.070 "name": "static" 00:20:16.070 } 00:20:16.070 } 00:20:16.070 ] 00:20:16.070 }, 00:20:16.070 { 00:20:16.070 "subsystem": "nvmf", 00:20:16.070 "config": [ 00:20:16.070 { 00:20:16.070 "method": "nvmf_set_config", 00:20:16.070 "params": { 00:20:16.070 "discovery_filter": "match_any", 00:20:16.070 "admin_cmd_passthru": { 00:20:16.070 "identify_ctrlr": false 00:20:16.070 }, 00:20:16.070 "dhchap_digests": [ 00:20:16.070 "sha256", 00:20:16.070 "sha384", 00:20:16.070 "sha512" 00:20:16.070 ], 00:20:16.070 "dhchap_dhgroups": [ 00:20:16.070 "null", 00:20:16.070 "ffdhe2048", 00:20:16.070 "ffdhe3072", 00:20:16.070 "ffdhe4096", 00:20:16.070 "ffdhe6144", 00:20:16.070 "ffdhe8192" 00:20:16.070 ] 00:20:16.070 } 00:20:16.070 }, 00:20:16.070 { 00:20:16.070 "method": "nvmf_set_max_subsystems", 00:20:16.070 "params": { 00:20:16.070 "max_subsystems": 1024 00:20:16.070 } 00:20:16.070 }, 00:20:16.070 { 00:20:16.070 "method": "nvmf_set_crdt", 00:20:16.070 "params": { 00:20:16.070 "crdt1": 0, 00:20:16.070 "crdt2": 0, 00:20:16.070 "crdt3": 0 00:20:16.070 } 00:20:16.070 }, 00:20:16.070 { 00:20:16.070 "method": "nvmf_create_transport", 00:20:16.070 "params": 
{ 00:20:16.070 "trtype": "TCP", 00:20:16.070 "max_queue_depth": 128, 00:20:16.070 "max_io_qpairs_per_ctrlr": 127, 00:20:16.070 "in_capsule_data_size": 4096, 00:20:16.070 "max_io_size": 131072, 00:20:16.070 "io_unit_size": 131072, 00:20:16.070 "max_aq_depth": 128, 00:20:16.070 "num_shared_buffers": 511, 00:20:16.070 "buf_cache_size": 4294967295, 00:20:16.070 "dif_insert_or_strip": false, 00:20:16.070 "zcopy": false, 00:20:16.070 "c2h_success": false, 00:20:16.070 "sock_priority": 0, 00:20:16.070 "abort_timeout_sec": 1, 00:20:16.070 "ack_timeout": 0, 00:20:16.070 "data_wr_pool_size": 0 00:20:16.070 } 00:20:16.070 }, 00:20:16.070 { 00:20:16.070 "method": "nvmf_create_subsystem", 00:20:16.070 "params": { 00:20:16.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.070 "allow_any_host": false, 00:20:16.070 "serial_number": "00000000000000000000", 00:20:16.070 "model_number": "SPDK bdev Controller", 00:20:16.070 "max_namespaces": 32, 00:20:16.070 "min_cntlid": 1, 00:20:16.070 "max_cntlid": 65519, 00:20:16.070 "ana_reporting": false 00:20:16.070 } 00:20:16.070 }, 00:20:16.070 { 00:20:16.070 "method": "nvmf_subsystem_add_host", 00:20:16.070 "params": { 00:20:16.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.070 "host": "nqn.2016-06.io.spdk:host1", 00:20:16.070 "psk": "key0" 00:20:16.070 } 00:20:16.070 }, 00:20:16.070 { 00:20:16.070 "method": "nvmf_subsystem_add_ns", 00:20:16.070 "params": { 00:20:16.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.070 "namespace": { 00:20:16.070 "nsid": 1, 00:20:16.070 "bdev_name": "malloc0", 00:20:16.070 "nguid": "69F584A94B0B4DECA535BA23AC86A1BB", 00:20:16.070 "uuid": "69f584a9-4b0b-4dec-a535-ba23ac86a1bb", 00:20:16.070 "no_auto_visible": false 00:20:16.070 } 00:20:16.070 } 00:20:16.070 }, 00:20:16.070 { 00:20:16.070 "method": "nvmf_subsystem_add_listener", 00:20:16.070 "params": { 00:20:16.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.070 "listen_address": { 00:20:16.070 "trtype": "TCP", 00:20:16.070 "adrfam": "IPv4", 00:20:16.070 "traddr": "10.0.0.2", 00:20:16.070 "trsvcid": "4420" 00:20:16.070 }, 00:20:16.070 "secure_channel": false, 00:20:16.070 "sock_impl": "ssl" 00:20:16.070 } 00:20:16.070 } 00:20:16.070 ] 00:20:16.070 } 00:20:16.070 ] 00:20:16.070 }' 00:20:16.070 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:16.329 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:16.329 "subsystems": [ 00:20:16.329 { 00:20:16.329 "subsystem": "keyring", 00:20:16.329 "config": [ 00:20:16.329 { 00:20:16.329 "method": "keyring_file_add_key", 00:20:16.329 "params": { 00:20:16.329 "name": "key0", 00:20:16.329 "path": "/tmp/tmp.gk0wdYDzMM" 00:20:16.329 } 00:20:16.329 } 00:20:16.329 ] 00:20:16.329 }, 00:20:16.329 { 00:20:16.329 "subsystem": "iobuf", 00:20:16.329 "config": [ 00:20:16.329 { 00:20:16.329 "method": "iobuf_set_options", 00:20:16.329 "params": { 00:20:16.329 "small_pool_count": 8192, 00:20:16.329 "large_pool_count": 1024, 00:20:16.329 "small_bufsize": 8192, 00:20:16.329 "large_bufsize": 135168, 00:20:16.329 "enable_numa": false 00:20:16.329 } 00:20:16.329 } 00:20:16.329 ] 00:20:16.329 }, 00:20:16.329 { 00:20:16.329 "subsystem": "sock", 00:20:16.329 "config": [ 00:20:16.329 { 00:20:16.329 "method": "sock_set_default_impl", 00:20:16.329 "params": { 00:20:16.329 "impl_name": "posix" 00:20:16.329 } 00:20:16.329 }, 00:20:16.329 { 00:20:16.329 "method": "sock_impl_set_options", 00:20:16.329 
"params": { 00:20:16.329 "impl_name": "ssl", 00:20:16.329 "recv_buf_size": 4096, 00:20:16.329 "send_buf_size": 4096, 00:20:16.329 "enable_recv_pipe": true, 00:20:16.329 "enable_quickack": false, 00:20:16.329 "enable_placement_id": 0, 00:20:16.329 "enable_zerocopy_send_server": true, 00:20:16.329 "enable_zerocopy_send_client": false, 00:20:16.329 "zerocopy_threshold": 0, 00:20:16.329 "tls_version": 0, 00:20:16.329 "enable_ktls": false 00:20:16.329 } 00:20:16.329 }, 00:20:16.329 { 00:20:16.329 "method": "sock_impl_set_options", 00:20:16.329 "params": { 00:20:16.329 "impl_name": "posix", 00:20:16.329 "recv_buf_size": 2097152, 00:20:16.329 "send_buf_size": 2097152, 00:20:16.329 "enable_recv_pipe": true, 00:20:16.329 "enable_quickack": false, 00:20:16.329 "enable_placement_id": 0, 00:20:16.329 "enable_zerocopy_send_server": true, 00:20:16.329 "enable_zerocopy_send_client": false, 00:20:16.329 "zerocopy_threshold": 0, 00:20:16.329 "tls_version": 0, 00:20:16.329 "enable_ktls": false 00:20:16.329 } 00:20:16.329 } 00:20:16.329 ] 00:20:16.329 }, 00:20:16.329 { 00:20:16.329 "subsystem": "vmd", 00:20:16.329 "config": [] 00:20:16.329 }, 00:20:16.329 { 00:20:16.329 "subsystem": "accel", 00:20:16.329 "config": [ 00:20:16.329 { 00:20:16.329 "method": "accel_set_options", 00:20:16.329 "params": { 00:20:16.329 "small_cache_size": 128, 00:20:16.329 "large_cache_size": 16, 00:20:16.329 "task_count": 2048, 00:20:16.329 "sequence_count": 2048, 00:20:16.329 "buf_count": 2048 00:20:16.329 } 00:20:16.329 } 00:20:16.329 ] 00:20:16.329 }, 00:20:16.329 { 00:20:16.329 "subsystem": "bdev", 00:20:16.329 "config": [ 00:20:16.329 { 00:20:16.329 "method": "bdev_set_options", 00:20:16.329 "params": { 00:20:16.329 "bdev_io_pool_size": 65535, 00:20:16.329 "bdev_io_cache_size": 256, 00:20:16.329 "bdev_auto_examine": true, 00:20:16.329 "iobuf_small_cache_size": 128, 00:20:16.329 "iobuf_large_cache_size": 16 00:20:16.329 } 00:20:16.329 }, 00:20:16.329 { 00:20:16.329 "method": "bdev_raid_set_options", 00:20:16.329 "params": { 00:20:16.329 "process_window_size_kb": 1024, 00:20:16.329 "process_max_bandwidth_mb_sec": 0 00:20:16.329 } 00:20:16.329 }, 00:20:16.329 { 00:20:16.329 "method": "bdev_iscsi_set_options", 00:20:16.329 "params": { 00:20:16.329 "timeout_sec": 30 00:20:16.329 } 00:20:16.329 }, 00:20:16.329 { 00:20:16.329 "method": "bdev_nvme_set_options", 00:20:16.329 "params": { 00:20:16.329 "action_on_timeout": "none", 00:20:16.329 "timeout_us": 0, 00:20:16.329 "timeout_admin_us": 0, 00:20:16.329 "keep_alive_timeout_ms": 10000, 00:20:16.329 "arbitration_burst": 0, 00:20:16.329 "low_priority_weight": 0, 00:20:16.329 "medium_priority_weight": 0, 00:20:16.329 "high_priority_weight": 0, 00:20:16.329 "nvme_adminq_poll_period_us": 10000, 00:20:16.329 "nvme_ioq_poll_period_us": 0, 00:20:16.329 "io_queue_requests": 512, 00:20:16.329 "delay_cmd_submit": true, 00:20:16.329 "transport_retry_count": 4, 00:20:16.329 "bdev_retry_count": 3, 00:20:16.329 "transport_ack_timeout": 0, 00:20:16.329 "ctrlr_loss_timeout_sec": 0, 00:20:16.329 "reconnect_delay_sec": 0, 00:20:16.330 "fast_io_fail_timeout_sec": 0, 00:20:16.330 "disable_auto_failback": false, 00:20:16.330 "generate_uuids": false, 00:20:16.330 "transport_tos": 0, 00:20:16.330 "nvme_error_stat": false, 00:20:16.330 "rdma_srq_size": 0, 00:20:16.330 "io_path_stat": false, 00:20:16.330 "allow_accel_sequence": false, 00:20:16.330 "rdma_max_cq_size": 0, 00:20:16.330 "rdma_cm_event_timeout_ms": 0, 00:20:16.330 "dhchap_digests": [ 00:20:16.330 "sha256", 00:20:16.330 "sha384", 00:20:16.330 
"sha512" 00:20:16.330 ], 00:20:16.330 "dhchap_dhgroups": [ 00:20:16.330 "null", 00:20:16.330 "ffdhe2048", 00:20:16.330 "ffdhe3072", 00:20:16.330 "ffdhe4096", 00:20:16.330 "ffdhe6144", 00:20:16.330 "ffdhe8192" 00:20:16.330 ] 00:20:16.330 } 00:20:16.330 }, 00:20:16.330 { 00:20:16.330 "method": "bdev_nvme_attach_controller", 00:20:16.330 "params": { 00:20:16.330 "name": "nvme0", 00:20:16.330 "trtype": "TCP", 00:20:16.330 "adrfam": "IPv4", 00:20:16.330 "traddr": "10.0.0.2", 00:20:16.330 "trsvcid": "4420", 00:20:16.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.330 "prchk_reftag": false, 00:20:16.330 "prchk_guard": false, 00:20:16.330 "ctrlr_loss_timeout_sec": 0, 00:20:16.330 "reconnect_delay_sec": 0, 00:20:16.330 "fast_io_fail_timeout_sec": 0, 00:20:16.330 "psk": "key0", 00:20:16.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.330 "hdgst": false, 00:20:16.330 "ddgst": false, 00:20:16.330 "multipath": "multipath" 00:20:16.330 } 00:20:16.330 }, 00:20:16.330 { 00:20:16.330 "method": "bdev_nvme_set_hotplug", 00:20:16.330 "params": { 00:20:16.330 "period_us": 100000, 00:20:16.330 "enable": false 00:20:16.330 } 00:20:16.330 }, 00:20:16.330 { 00:20:16.330 "method": "bdev_enable_histogram", 00:20:16.330 "params": { 00:20:16.330 "name": "nvme0n1", 00:20:16.330 "enable": true 00:20:16.330 } 00:20:16.330 }, 00:20:16.330 { 00:20:16.330 "method": "bdev_wait_for_examine" 00:20:16.330 } 00:20:16.330 ] 00:20:16.330 }, 00:20:16.330 { 00:20:16.330 "subsystem": "nbd", 00:20:16.330 "config": [] 00:20:16.330 } 00:20:16.330 ] 00:20:16.330 }' 00:20:16.330 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 402210 00:20:16.330 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 402210 ']' 00:20:16.330 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 402210 00:20:16.330 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:16.330 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:16.330 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 402210 00:20:16.330 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:16.330 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:16.330 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 402210' 00:20:16.330 killing process with pid 402210 00:20:16.330 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 402210 00:20:16.330 Received shutdown signal, test time was about 1.000000 seconds 00:20:16.330 00:20:16.330 Latency(us) 00:20:16.330 [2024-11-15T09:39:04.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.330 [2024-11-15T09:39:04.793Z] =================================================================================================================== 00:20:16.330 [2024-11-15T09:39:04.793Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:16.330 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 402210 00:20:16.587 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 402164 00:20:16.587 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 402164 ']' 
00:20:16.587 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 402164 00:20:16.587 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:16.587 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:16.587 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 402164 00:20:16.587 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:16.587 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:16.587 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 402164' 00:20:16.587 killing process with pid 402164 00:20:16.587 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 402164 00:20:16.587 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 402164 00:20:16.847 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:16.847 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:16.847 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:16.847 "subsystems": [ 00:20:16.847 { 00:20:16.847 "subsystem": "keyring", 00:20:16.847 "config": [ 00:20:16.847 { 00:20:16.847 "method": "keyring_file_add_key", 00:20:16.847 "params": { 00:20:16.847 "name": "key0", 00:20:16.847 "path": "/tmp/tmp.gk0wdYDzMM" 00:20:16.847 } 00:20:16.847 } 00:20:16.847 ] 00:20:16.847 }, 00:20:16.847 { 00:20:16.847 "subsystem": "iobuf", 00:20:16.847 "config": [ 00:20:16.847 { 00:20:16.847 "method": "iobuf_set_options", 00:20:16.847 "params": { 00:20:16.847 "small_pool_count": 8192, 00:20:16.847 "large_pool_count": 1024, 00:20:16.847 "small_bufsize": 8192, 00:20:16.847 "large_bufsize": 135168, 00:20:16.847 "enable_numa": false 00:20:16.847 } 00:20:16.847 } 00:20:16.847 ] 00:20:16.847 }, 00:20:16.847 { 00:20:16.847 "subsystem": "sock", 00:20:16.847 "config": [ 00:20:16.847 { 00:20:16.847 "method": "sock_set_default_impl", 00:20:16.847 "params": { 00:20:16.847 "impl_name": "posix" 00:20:16.847 } 00:20:16.847 }, 00:20:16.847 { 00:20:16.847 "method": "sock_impl_set_options", 00:20:16.847 "params": { 00:20:16.847 "impl_name": "ssl", 00:20:16.847 "recv_buf_size": 4096, 00:20:16.847 "send_buf_size": 4096, 00:20:16.848 "enable_recv_pipe": true, 00:20:16.848 "enable_quickack": false, 00:20:16.848 "enable_placement_id": 0, 00:20:16.848 "enable_zerocopy_send_server": true, 00:20:16.848 "enable_zerocopy_send_client": false, 00:20:16.848 "zerocopy_threshold": 0, 00:20:16.848 "tls_version": 0, 00:20:16.848 "enable_ktls": false 00:20:16.848 } 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "method": "sock_impl_set_options", 00:20:16.848 "params": { 00:20:16.848 "impl_name": "posix", 00:20:16.848 "recv_buf_size": 2097152, 00:20:16.848 "send_buf_size": 2097152, 00:20:16.848 "enable_recv_pipe": true, 00:20:16.848 "enable_quickack": false, 00:20:16.848 "enable_placement_id": 0, 00:20:16.848 "enable_zerocopy_send_server": true, 00:20:16.848 "enable_zerocopy_send_client": false, 00:20:16.848 "zerocopy_threshold": 0, 00:20:16.848 "tls_version": 0, 00:20:16.848 "enable_ktls": false 00:20:16.848 } 00:20:16.848 } 00:20:16.848 ] 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "subsystem": "vmd", 
00:20:16.848 "config": [] 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "subsystem": "accel", 00:20:16.848 "config": [ 00:20:16.848 { 00:20:16.848 "method": "accel_set_options", 00:20:16.848 "params": { 00:20:16.848 "small_cache_size": 128, 00:20:16.848 "large_cache_size": 16, 00:20:16.848 "task_count": 2048, 00:20:16.848 "sequence_count": 2048, 00:20:16.848 "buf_count": 2048 00:20:16.848 } 00:20:16.848 } 00:20:16.848 ] 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "subsystem": "bdev", 00:20:16.848 "config": [ 00:20:16.848 { 00:20:16.848 "method": "bdev_set_options", 00:20:16.848 "params": { 00:20:16.848 "bdev_io_pool_size": 65535, 00:20:16.848 "bdev_io_cache_size": 256, 00:20:16.848 "bdev_auto_examine": true, 00:20:16.848 "iobuf_small_cache_size": 128, 00:20:16.848 "iobuf_large_cache_size": 16 00:20:16.848 } 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "method": "bdev_raid_set_options", 00:20:16.848 "params": { 00:20:16.848 "process_window_size_kb": 1024, 00:20:16.848 "process_max_bandwidth_mb_sec": 0 00:20:16.848 } 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "method": "bdev_iscsi_set_options", 00:20:16.848 "params": { 00:20:16.848 "timeout_sec": 30 00:20:16.848 } 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "method": "bdev_nvme_set_options", 00:20:16.848 "params": { 00:20:16.848 "action_on_timeout": "none", 00:20:16.848 "timeout_us": 0, 00:20:16.848 "timeout_admin_us": 0, 00:20:16.848 "keep_alive_timeout_ms": 10000, 00:20:16.848 "arbitration_burst": 0, 00:20:16.848 "low_priority_weight": 0, 00:20:16.848 "medium_priority_weight": 0, 00:20:16.848 "high_priority_weight": 0, 00:20:16.848 "nvme_adminq_poll_period_us": 10000, 00:20:16.848 "nvme_ioq_poll_period_us": 0, 00:20:16.848 "io_queue_requests": 0, 00:20:16.848 "delay_cmd_submit": true, 00:20:16.848 "transport_retry_count": 4, 00:20:16.848 "bdev_retry_count": 3, 00:20:16.848 "transport_ack_timeout": 0, 00:20:16.848 "ctrlr_loss_timeout_sec": 0, 00:20:16.848 "reconnect_delay_sec": 0, 00:20:16.848 "fast_io_fail_timeout_sec": 0, 00:20:16.848 "disable_auto_failback": false, 00:20:16.848 "generate_uuids": false, 00:20:16.848 "transport_tos": 0, 00:20:16.848 "nvme_error_stat": false, 00:20:16.848 "rdma_srq_size": 0, 00:20:16.848 "io_path_stat": false, 00:20:16.848 "allow_accel_sequence": false, 00:20:16.848 "rdma_max_cq_size": 0, 00:20:16.848 "rdma_cm_event_timeout_ms": 0, 00:20:16.848 "dhchap_digests": [ 00:20:16.848 "sha256", 00:20:16.848 "sha384", 00:20:16.848 "sha512" 00:20:16.848 ], 00:20:16.848 "dhchap_dhgroups": [ 00:20:16.848 "null", 00:20:16.848 "ffdhe2048", 00:20:16.848 "ffdhe3072", 00:20:16.848 "ffdhe4096", 00:20:16.848 "ffdhe6144", 00:20:16.848 "ffdhe8192" 00:20:16.848 ] 00:20:16.848 } 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "method": "bdev_nvme_set_hotplug", 00:20:16.848 "params": { 00:20:16.848 "period_us": 100000, 00:20:16.848 "enable": false 00:20:16.848 } 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "method": "bdev_malloc_create", 00:20:16.848 "params": { 00:20:16.848 "name": "malloc0", 00:20:16.848 "num_blocks": 8192, 00:20:16.848 "block_size": 4096, 00:20:16.848 "physical_block_size": 4096, 00:20:16.848 "uuid": "69f584a9-4b0b-4dec-a535-ba23ac86a1bb", 00:20:16.848 "optimal_io_boundary": 0, 00:20:16.848 "md_size": 0, 00:20:16.848 "dif_type": 0, 00:20:16.848 "dif_is_head_of_md": false, 00:20:16.848 "dif_pi_format": 0 00:20:16.848 } 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "method": "bdev_wait_for_examine" 00:20:16.848 } 00:20:16.848 ] 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "subsystem": "nbd", 00:20:16.848 "config": [] 
00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "subsystem": "scheduler", 00:20:16.848 "config": [ 00:20:16.848 { 00:20:16.848 "method": "framework_set_scheduler", 00:20:16.848 "params": { 00:20:16.848 "name": "static" 00:20:16.848 } 00:20:16.848 } 00:20:16.848 ] 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "subsystem": "nvmf", 00:20:16.848 "config": [ 00:20:16.848 { 00:20:16.848 "method": "nvmf_set_config", 00:20:16.848 "params": { 00:20:16.848 "discovery_filter": "match_any", 00:20:16.848 "admin_cmd_passthru": { 00:20:16.848 "identify_ctrlr": false 00:20:16.848 }, 00:20:16.848 "dhchap_digests": [ 00:20:16.848 "sha256", 00:20:16.848 "sha384", 00:20:16.848 "sha512" 00:20:16.848 ], 00:20:16.848 "dhchap_dhgroups": [ 00:20:16.848 "null", 00:20:16.848 "ffdhe2048", 00:20:16.848 "ffdhe3072", 00:20:16.848 "ffdhe4096", 00:20:16.848 "ffdhe6144", 00:20:16.848 "ffdhe8192" 00:20:16.848 ] 00:20:16.848 } 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "method": "nvmf_set_max_subsystems", 00:20:16.848 "params": { 00:20:16.848 "max_subsystems": 1024 00:20:16.848 } 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "method": "nvmf_set_crdt", 00:20:16.848 "params": { 00:20:16.848 "crdt1": 0, 00:20:16.848 "crdt2": 0, 00:20:16.848 "crdt3": 0 00:20:16.848 } 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "method": "nvmf_create_transport", 00:20:16.848 "params": { 00:20:16.848 "trtype": "TCP", 00:20:16.848 "max_queue_depth": 128, 00:20:16.848 "max_io_qpairs_per_ctrlr": 127, 00:20:16.848 "in_capsule_data_size": 4096, 00:20:16.848 "max_io_size": 131072, 00:20:16.848 "io_unit_size": 131072, 00:20:16.848 "max_aq_depth": 128, 00:20:16.848 "num_shared_buffers": 511, 00:20:16.848 "buf_cache_size": 4294967295, 00:20:16.848 "dif_insert_or_strip": false, 00:20:16.848 "zcopy": false, 00:20:16.848 "c2h_success": false, 00:20:16.848 "sock_priority": 0, 00:20:16.848 "abort_timeout_sec": 1, 00:20:16.848 "ack_timeout": 0, 00:20:16.848 "data_wr_pool_size": 0 00:20:16.848 } 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "method": "nvmf_create_subsystem", 00:20:16.848 "params": { 00:20:16.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.848 "allow_any_host": false, 00:20:16.848 "serial_number": "00000000000000000000", 00:20:16.848 "model_number": "SPDK bdev Controller", 00:20:16.848 "max_namespaces": 32, 00:20:16.848 "min_cntlid": 1, 00:20:16.848 "max_cntlid": 65519, 00:20:16.848 "ana_reporting": false 00:20:16.848 } 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "method": "nvmf_subsystem_add_host", 00:20:16.848 "params": { 00:20:16.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.848 "host": "nqn.2016-06.io.spdk:host1", 00:20:16.848 "psk": "key0" 00:20:16.848 } 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "method": "nvmf_subsystem_add_ns", 00:20:16.848 "params": { 00:20:16.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.848 "namespace": { 00:20:16.848 "nsid": 1, 00:20:16.848 "bdev_name": "malloc0", 00:20:16.848 "nguid": "69F584A94B0B4DECA535BA23AC86A1BB", 00:20:16.848 "uuid": "69f584a9-4b0b-4dec-a535-ba23ac86a1bb", 00:20:16.848 "no_auto_visible": false 00:20:16.848 } 00:20:16.848 } 00:20:16.849 }, 00:20:16.849 { 00:20:16.849 "method": "nvmf_subsystem_add_listener", 00:20:16.849 "params": { 00:20:16.849 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.849 "listen_address": { 00:20:16.849 "trtype": "TCP", 00:20:16.849 "adrfam": "IPv4", 00:20:16.849 "traddr": "10.0.0.2", 00:20:16.849 "trsvcid": "4420" 00:20:16.849 }, 00:20:16.849 "secure_channel": false, 00:20:16.849 "sock_impl": "ssl" 00:20:16.849 } 00:20:16.849 } 00:20:16.849 ] 00:20:16.849 } 00:20:16.849 
] 00:20:16.849 }' 00:20:16.849 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:16.849 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.849 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=402626 00:20:16.849 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:16.849 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 402626 00:20:16.849 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 402626 ']' 00:20:16.849 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.849 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:16.849 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.849 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:16.849 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.849 [2024-11-15 10:39:05.303878] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:20:16.849 [2024-11-15 10:39:05.303969] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.107 [2024-11-15 10:39:05.374838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.107 [2024-11-15 10:39:05.428295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.107 [2024-11-15 10:39:05.428355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.107 [2024-11-15 10:39:05.428393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.107 [2024-11-15 10:39:05.428405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.107 [2024-11-15 10:39:05.428414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
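Stripped of the wrapping and timestamps, the target-side config above does four things that matter for TLS: register a PSK under the keyring name key0 (that part of the config precedes this excerpt), create the TCP transport, create a subsystem that only admits nqn.2016-06.io.spdk:host1, and open a listener on 10.0.0.2:4420 with sock_impl set to ssl. A minimal sketch of that shape follows; it is not the test's literal config -- the malloc0 bdev/namespace setup is omitted, the PSK path /tmp/psk.txt and the nvmf_tgt path are placeholders, and the config is fed on stdin instead of the test's /dev/fd/62 process substitution.

# Minimal sketch (not the test's literal config): a trimmed target-side JSON
# with the same method names as the dump above, started the same way.
/path/to/spdk/build/bin/nvmf_tgt -i 0 -c /dev/stdin <<'EOF'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/psk.txt" } } ] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_create_transport",
        "params": { "trtype": "TCP" } },
      { "method": "nvmf_create_subsystem",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "allow_any_host": false,
                    "serial_number": "00000000000000000000",
                    "model_number": "SPDK bdev Controller" } },
      { "method": "nvmf_subsystem_add_host",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "host": "nqn.2016-06.io.spdk:host1",
                    "psk": "key0" } },
      { "method": "nvmf_subsystem_add_listener",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                        "traddr": "10.0.0.2", "trsvcid": "4420" },
                    "secure_channel": false,
                    "sock_impl": "ssl" } } ] }
  ]
}
EOF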
00:20:17.107 [2024-11-15 10:39:05.429063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.365 [2024-11-15 10:39:05.675125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.366 [2024-11-15 10:39:05.707157] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.366 [2024-11-15 10:39:05.707441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.933 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:17.933 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:17.933 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:17.933 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:17.933 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.933 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.933 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=402779 00:20:17.933 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 402779 /var/tmp/bdevperf.sock 00:20:17.933 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 402779 ']' 00:20:17.933 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.933 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:17.933 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:17.933 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:17.933 "subsystems": [ 00:20:17.933 { 00:20:17.933 "subsystem": "keyring", 00:20:17.933 "config": [ 00:20:17.933 { 00:20:17.933 "method": "keyring_file_add_key", 00:20:17.933 "params": { 00:20:17.933 "name": "key0", 00:20:17.933 "path": "/tmp/tmp.gk0wdYDzMM" 00:20:17.933 } 00:20:17.933 } 00:20:17.933 ] 00:20:17.933 }, 00:20:17.933 { 00:20:17.933 "subsystem": "iobuf", 00:20:17.933 "config": [ 00:20:17.933 { 00:20:17.933 "method": "iobuf_set_options", 00:20:17.933 "params": { 00:20:17.933 "small_pool_count": 8192, 00:20:17.933 "large_pool_count": 1024, 00:20:17.933 "small_bufsize": 8192, 00:20:17.933 "large_bufsize": 135168, 00:20:17.933 "enable_numa": false 00:20:17.933 } 00:20:17.933 } 00:20:17.933 ] 00:20:17.933 }, 00:20:17.933 { 00:20:17.933 "subsystem": "sock", 00:20:17.933 "config": [ 00:20:17.933 { 00:20:17.933 "method": "sock_set_default_impl", 00:20:17.933 "params": { 00:20:17.933 "impl_name": "posix" 00:20:17.933 } 00:20:17.933 }, 00:20:17.933 { 00:20:17.933 "method": "sock_impl_set_options", 00:20:17.933 "params": { 00:20:17.933 "impl_name": "ssl", 00:20:17.933 "recv_buf_size": 4096, 00:20:17.933 "send_buf_size": 4096, 00:20:17.933 "enable_recv_pipe": true, 00:20:17.933 "enable_quickack": false, 00:20:17.933 "enable_placement_id": 0, 00:20:17.933 "enable_zerocopy_send_server": true, 00:20:17.933 "enable_zerocopy_send_client": false, 00:20:17.933 "zerocopy_threshold": 0, 00:20:17.933 "tls_version": 0, 00:20:17.933 
"enable_ktls": false 00:20:17.933 } 00:20:17.933 }, 00:20:17.933 { 00:20:17.933 "method": "sock_impl_set_options", 00:20:17.933 "params": { 00:20:17.933 "impl_name": "posix", 00:20:17.933 "recv_buf_size": 2097152, 00:20:17.933 "send_buf_size": 2097152, 00:20:17.933 "enable_recv_pipe": true, 00:20:17.933 "enable_quickack": false, 00:20:17.933 "enable_placement_id": 0, 00:20:17.933 "enable_zerocopy_send_server": true, 00:20:17.933 "enable_zerocopy_send_client": false, 00:20:17.933 "zerocopy_threshold": 0, 00:20:17.933 "tls_version": 0, 00:20:17.933 "enable_ktls": false 00:20:17.933 } 00:20:17.933 } 00:20:17.933 ] 00:20:17.933 }, 00:20:17.933 { 00:20:17.933 "subsystem": "vmd", 00:20:17.933 "config": [] 00:20:17.933 }, 00:20:17.933 { 00:20:17.933 "subsystem": "accel", 00:20:17.933 "config": [ 00:20:17.933 { 00:20:17.933 "method": "accel_set_options", 00:20:17.933 "params": { 00:20:17.933 "small_cache_size": 128, 00:20:17.933 "large_cache_size": 16, 00:20:17.933 "task_count": 2048, 00:20:17.933 "sequence_count": 2048, 00:20:17.933 "buf_count": 2048 00:20:17.933 } 00:20:17.933 } 00:20:17.933 ] 00:20:17.933 }, 00:20:17.933 { 00:20:17.933 "subsystem": "bdev", 00:20:17.933 "config": [ 00:20:17.933 { 00:20:17.933 "method": "bdev_set_options", 00:20:17.933 "params": { 00:20:17.933 "bdev_io_pool_size": 65535, 00:20:17.933 "bdev_io_cache_size": 256, 00:20:17.933 "bdev_auto_examine": true, 00:20:17.933 "iobuf_small_cache_size": 128, 00:20:17.933 "iobuf_large_cache_size": 16 00:20:17.933 } 00:20:17.933 }, 00:20:17.933 { 00:20:17.933 "method": "bdev_raid_set_options", 00:20:17.933 "params": { 00:20:17.933 "process_window_size_kb": 1024, 00:20:17.933 "process_max_bandwidth_mb_sec": 0 00:20:17.933 } 00:20:17.933 }, 00:20:17.933 { 00:20:17.933 "method": "bdev_iscsi_set_options", 00:20:17.933 "params": { 00:20:17.933 "timeout_sec": 30 00:20:17.933 } 00:20:17.933 }, 00:20:17.933 { 00:20:17.933 "method": "bdev_nvme_set_options", 00:20:17.933 "params": { 00:20:17.933 "action_on_timeout": "none", 00:20:17.933 "timeout_us": 0, 00:20:17.933 "timeout_admin_us": 0, 00:20:17.933 "keep_alive_timeout_ms": 10000, 00:20:17.933 "arbitration_burst": 0, 00:20:17.933 "low_priority_weight": 0, 00:20:17.933 "medium_priority_weight": 0, 00:20:17.933 "high_priority_weight": 0, 00:20:17.933 "nvme_adminq_poll_period_us": 10000, 00:20:17.933 "nvme_ioq_poll_period_us": 0, 00:20:17.933 "io_queue_requests": 512, 00:20:17.933 "delay_cmd_submit": true, 00:20:17.933 "transport_retry_count": 4, 00:20:17.933 "bdev_retry_count": 3, 00:20:17.933 "transport_ack_timeout": 0, 00:20:17.933 "ctrlr_loss_timeout_sec": 0, 00:20:17.933 "reconnect_delay_sec": 0, 00:20:17.933 "fast_io_fail_timeout_sec": 0, 00:20:17.933 "disable_auto_failback": false, 00:20:17.933 "generate_uuids": false, 00:20:17.933 "transport_tos": 0, 00:20:17.933 "nvme_error_stat": false, 00:20:17.934 "rdma_srq_size": 0, 00:20:17.934 "io_path_stat": false, 00:20:17.934 "allow_accel_sequence": false, 00:20:17.934 "rdma_max_cq_size": 0, 00:20:17.934 "rdma_cm_event_timeout_ms": 0 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:17.934 , 00:20:17.934 "dhchap_digests": [ 00:20:17.934 "sha256", 00:20:17.934 "sha384", 00:20:17.934 "sha512" 00:20:17.934 ], 00:20:17.934 "dhchap_dhgroups": [ 00:20:17.934 "null", 00:20:17.934 "ffdhe2048", 00:20:17.934 "ffdhe3072", 00:20:17.934 "ffdhe4096", 00:20:17.934 "ffdhe6144", 00:20:17.934 "ffdhe8192" 00:20:17.934 ] 00:20:17.934 } 00:20:17.934 }, 00:20:17.934 { 00:20:17.934 "method": "bdev_nvme_attach_controller", 00:20:17.934 "params": { 00:20:17.934 "name": "nvme0", 00:20:17.934 "trtype": "TCP", 00:20:17.934 "adrfam": "IPv4", 00:20:17.934 "traddr": "10.0.0.2", 00:20:17.934 "trsvcid": "4420", 00:20:17.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.934 "prchk_reftag": false, 00:20:17.934 "prchk_guard": false, 00:20:17.934 "ctrlr_loss_timeout_sec": 0, 00:20:17.934 "reconnect_delay_sec": 0, 00:20:17.934 "fast_io_fail_timeout_sec": 0, 00:20:17.934 "psk": "key0", 00:20:17.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.934 "hdgst": false, 00:20:17.934 "ddgst": false, 00:20:17.934 "multipath": "multipath" 00:20:17.934 } 00:20:17.934 }, 00:20:17.934 { 00:20:17.934 "method": "bdev_nvme_set_hotplug", 00:20:17.934 "params": { 00:20:17.934 "period_us": 100000, 00:20:17.934 "enable": false 00:20:17.934 } 00:20:17.934 }, 00:20:17.934 { 00:20:17.934 "method": "bdev_enable_histogram", 00:20:17.934 "params": { 00:20:17.934 "name": "nvme0n1", 00:20:17.934 "enable": true 00:20:17.934 } 00:20:17.934 }, 00:20:17.934 { 00:20:17.934 "method": "bdev_wait_for_examine" 00:20:17.934 } 00:20:17.934 ] 00:20:17.934 }, 00:20:17.934 { 00:20:17.934 "subsystem": "nbd", 00:20:17.934 "config": [] 00:20:17.934 } 00:20:17.934 ] 00:20:17.934 }' 00:20:17.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.934 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:17.934 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.934 [2024-11-15 10:39:06.368617] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
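On the initiator side, the JSON handed to bdevperf above reduces to the same two TLS-relevant pieces: a keyring_file_add_key entry named key0 and a bdev_nvme_attach_controller that references it via "psk": "key0" while connecting to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420. A trimmed sketch follows, with a placeholder PSK path (the test uses a temp file such as /tmp/tmp.gk0wdYDzMM), the ssl/posix sock_impl_set_options left at their defaults (the test sets them explicitly), and the config fed on stdin rather than /dev/fd/63; the actual I/O is then driven over /var/tmp/bdevperf.sock by bdevperf.py perform_tests, as seen further down.

# Minimal sketch of the initiator side: same bdevperf command line as above,
# with the JSON cut down to the keyring entry and the TLS controller attach.
/path/to/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
  -q 128 -o 4k -w verify -t 1 -c /dev/stdin <<'EOF'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/psk.txt" } } ] },
    { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                    "traddr": "10.0.0.2", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "psk": "key0" } },
      { "method": "bdev_wait_for_examine" } ] }
  ]
}
EOF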
00:20:17.934 [2024-11-15 10:39:06.368706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402779 ] 00:20:18.215 [2024-11-15 10:39:06.435656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.215 [2024-11-15 10:39:06.495177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.215 [2024-11-15 10:39:06.680176] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.148 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:19.148 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:19.148 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:19.148 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:19.148 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.148 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:19.406 Running I/O for 1 seconds... 00:20:20.340 3478.00 IOPS, 13.59 MiB/s 00:20:20.340 Latency(us) 00:20:20.340 [2024-11-15T09:39:08.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.340 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:20.340 Verification LBA range: start 0x0 length 0x2000 00:20:20.340 nvme0n1 : 1.02 3540.83 13.83 0.00 0.00 35832.66 6068.15 48156.82 00:20:20.340 [2024-11-15T09:39:08.803Z] =================================================================================================================== 00:20:20.340 [2024-11-15T09:39:08.803Z] Total : 3540.83 13.83 0.00 0.00 35832.66 6068.15 48156.82 00:20:20.340 { 00:20:20.340 "results": [ 00:20:20.340 { 00:20:20.340 "job": "nvme0n1", 00:20:20.340 "core_mask": "0x2", 00:20:20.340 "workload": "verify", 00:20:20.340 "status": "finished", 00:20:20.340 "verify_range": { 00:20:20.340 "start": 0, 00:20:20.340 "length": 8192 00:20:20.340 }, 00:20:20.340 "queue_depth": 128, 00:20:20.340 "io_size": 4096, 00:20:20.340 "runtime": 1.018405, 00:20:20.340 "iops": 3540.8310053465957, 00:20:20.340 "mibps": 13.83137111463514, 00:20:20.340 "io_failed": 0, 00:20:20.340 "io_timeout": 0, 00:20:20.340 "avg_latency_us": 35832.66050245475, 00:20:20.340 "min_latency_us": 6068.148148148148, 00:20:20.340 "max_latency_us": 48156.8237037037 00:20:20.340 } 00:20:20.340 ], 00:20:20.340 "core_count": 1 00:20:20.340 } 00:20:20.340 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:20.340 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:20.340 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:20.340 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:20:20.340 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:20:20.340 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = 
--pid ']' 00:20:20.340 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:20.340 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:20.340 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:20.340 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:20.340 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:20.340 nvmf_trace.0 00:20:20.598 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:20:20.598 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 402779 00:20:20.598 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 402779 ']' 00:20:20.598 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 402779 00:20:20.598 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:20.598 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:20.598 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 402779 00:20:20.598 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:20.598 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:20.598 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 402779' 00:20:20.598 killing process with pid 402779 00:20:20.598 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 402779 00:20:20.598 Received shutdown signal, test time was about 1.000000 seconds 00:20:20.598 00:20:20.598 Latency(us) 00:20:20.598 [2024-11-15T09:39:09.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.598 [2024-11-15T09:39:09.061Z] =================================================================================================================== 00:20:20.598 [2024-11-15T09:39:09.061Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.598 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 402779 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:20.856 rmmod nvme_tcp 00:20:20.856 rmmod nvme_fabrics 00:20:20.856 rmmod nvme_keyring 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.856 10:39:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 402626 ']' 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 402626 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 402626 ']' 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 402626 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 402626 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 402626' 00:20:20.856 killing process with pid 402626 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 402626 00:20:20.856 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 402626 00:20:21.114 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:21.114 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:21.114 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:21.114 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:21.114 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:21.114 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:21.114 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:21.114 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:21.114 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:21.114 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.114 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.114 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.017 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:23.017 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.MNsoSSh8sG /tmp/tmp.FtOw4ABGoR /tmp/tmp.gk0wdYDzMM 00:20:23.017 00:20:23.017 real 1m23.386s 00:20:23.017 user 2m17.133s 00:20:23.017 sys 0m28.414s 00:20:23.017 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:23.017 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.017 ************************************ 00:20:23.017 END TEST nvmf_tls 00:20:23.017 
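The teardown above applies the same pattern to both PIDs (bdevperf 402779, then the target 402626): kill -0 to confirm the PID is still alive, ps -o comm= to confirm it is still the reactor that was started rather than a recycled PID or a sudo wrapper, then kill and wait. A simplified sketch of that helper, not the literal autotest_common.sh implementation:

# Sketch of the killprocess pattern exercised in the trace above.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" || return 1                     # is it still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")        # what is it now?
    [ "$name" = sudo ] && return 1                 # never signal a sudo wrapper
    kill "$pid"                                    # ask the reactor to exit
    wait "$pid" 2>/dev/null                        # reap it; ignore "not a child"
}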
************************************ 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:23.277 ************************************ 00:20:23.277 START TEST nvmf_fips 00:20:23.277 ************************************ 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:23.277 * Looking for test storage... 00:20:23.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:23.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.277 --rc genhtml_branch_coverage=1 00:20:23.277 --rc genhtml_function_coverage=1 00:20:23.277 --rc genhtml_legend=1 00:20:23.277 --rc geninfo_all_blocks=1 00:20:23.277 --rc geninfo_unexecuted_blocks=1 00:20:23.277 00:20:23.277 ' 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:23.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.277 --rc genhtml_branch_coverage=1 00:20:23.277 --rc genhtml_function_coverage=1 00:20:23.277 --rc genhtml_legend=1 00:20:23.277 --rc geninfo_all_blocks=1 00:20:23.277 --rc geninfo_unexecuted_blocks=1 00:20:23.277 00:20:23.277 ' 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:23.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.277 --rc genhtml_branch_coverage=1 00:20:23.277 --rc genhtml_function_coverage=1 00:20:23.277 --rc genhtml_legend=1 00:20:23.277 --rc geninfo_all_blocks=1 00:20:23.277 --rc geninfo_unexecuted_blocks=1 00:20:23.277 00:20:23.277 ' 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:23.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.277 --rc genhtml_branch_coverage=1 00:20:23.277 --rc genhtml_function_coverage=1 00:20:23.277 --rc genhtml_legend=1 00:20:23.277 --rc geninfo_all_blocks=1 00:20:23.277 --rc geninfo_unexecuted_blocks=1 00:20:23.277 00:20:23.277 ' 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.277 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:23.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:23.278 10:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:23.278 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:23.537 Error setting digest 00:20:23.537 408204BAF37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:23.537 408204BAF37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:23.537 
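The checks above are the heart of the FIPS gate: require OpenSSL >= 3.0.0, require fips.so under openssl info -modulesdir, point OPENSSL_CONF at a generated spdk_fips.conf that activates the provider, confirm both the base and fips providers are listed, and finally prove enforcement by attempting an MD5 digest, which must fail (the "Error setting digest" lines are the expected outcome). Roughly the same sequence as a standalone sketch, assuming OPENSSL_CONF already enables the fips provider:

# Sketch of the FIPS sanity checks performed above, reduced to plain shell.
set -e
openssl version                                    # expect OpenSSL >= 3.0
test -f "$(openssl info -modulesdir)/fips.so"      # FIPS provider module present
openssl list -providers | grep -i fips             # fips provider actually loaded
if echo hello | openssl md5 >/dev/null 2>&1; then  # MD5 must be rejected in FIPS mode
    echo "MD5 succeeded - FIPS mode is NOT active" >&2
    exit 1
fi
echo "FIPS mode looks active"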
10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:23.537 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.070 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:26.070 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:26.070 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.070 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:26.070 Found net devices under 0000:82:00.0: cvl_0_0 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:26.070 Found net devices under 0000:82:00.1: cvl_0_1 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:26.070 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:26.070 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:26.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:20:26.071 00:20:26.071 --- 10.0.0.2 ping statistics --- 00:20:26.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.071 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:20:26.071 00:20:26.071 --- 10.0.0.1 ping statistics --- 00:20:26.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.071 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=405642 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 405642 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 405642 ']' 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.071 [2024-11-15 10:39:14.232946] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
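The namespace plumbing traced above (nvmf_tcp_init) boils down to a handful of ip/iptables commands. The following is a minimal consolidated sketch of those steps, not the common.sh implementation itself, assuming the cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addressing used in this run; the variable names are introduced only for the sketch and it must run as root.

#!/usr/bin/env bash
# Minimal sketch of the nvmf_tcp_init steps traced above (run as root).
# Assumes the e810 netdevs are named cvl_0_0 (target side) and cvl_0_1
# (initiator side) as in this run; adjust to the local interface names.
set -e

TGT_IF=cvl_0_0          # moved into the target network namespace
INI_IF=cvl_0_1          # stays in the root namespace (initiator side)
NS=cvl_0_0_ns_spdk      # namespace that will host nvmf_tgt

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP port on the initiator-facing interface, tagged so the
# teardown path can drop it again via iptables-save | grep -v SPDK_NVMF
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# verify both directions before the target is started
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1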
00:20:26.071 [2024-11-15 10:39:14.233035] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.071 [2024-11-15 10:39:14.303179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.071 [2024-11-15 10:39:14.358627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.071 [2024-11-15 10:39:14.358699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.071 [2024-11-15 10:39:14.358712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.071 [2024-11-15 10:39:14.358723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.071 [2024-11-15 10:39:14.358732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.071 [2024-11-15 10:39:14.359238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.61f 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.61f 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.61f 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.61f 00:20:26.071 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:26.329 [2024-11-15 10:39:14.756627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.329 [2024-11-15 10:39:14.772626] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:26.329 [2024-11-15 10:39:14.772841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.587 malloc0 00:20:26.587 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.587 10:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=405678 00:20:26.587 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:26.587 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 405678 /var/tmp/bdevperf.sock 00:20:26.587 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 405678 ']' 00:20:26.587 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.587 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:26.587 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.587 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:26.587 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.587 [2024-11-15 10:39:14.899520] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:20:26.587 [2024-11-15 10:39:14.899601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405678 ] 00:20:26.587 [2024-11-15 10:39:14.968052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.587 [2024-11-15 10:39:15.027098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.845 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:26.845 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:26.845 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.61f 00:20:27.103 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:27.361 [2024-11-15 10:39:15.657270] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.361 TLSTESTn1 00:20:27.361 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:27.619 Running I/O for 10 seconds... 
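For reference, the bdevperf side of the FIPS/TLS run traced above (fips.sh@137-156) reduces to writing the PSK to a 0600 file, two RPCs, and the perform_tests trigger. The sketch below restates those steps as a plain script under the same PSK value, RPC socket and NQNs as this run; SPDK_ROOT and the other variable names are introduced only for the sketch.

# Sketch of the initiator-side TLS attach traced above (fips.sh@137-156).
# SPDK_ROOT stands in for the spdk checkout; PSK value, socket and NQNs
# are the ones used in this run.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/bdevperf.sock"
PSK=/tmp/spdk-psk.61f

# interleaved TLS PSK written to a 0600 key file instead of the command line
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$PSK"
chmod 0600 "$PSK"

# register the key with bdevperf's keyring, then attach the controller over TLS
$RPC keyring_file_add_key key0 "$PSK"
$RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# start the queued verify workload bdevperf was launched with
# (-q 128 -o 4096 -w verify -t 10)
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests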
00:20:29.486 3514.00 IOPS, 13.73 MiB/s [2024-11-15T09:39:18.884Z] 3484.50 IOPS, 13.61 MiB/s [2024-11-15T09:39:20.258Z] 3439.00 IOPS, 13.43 MiB/s [2024-11-15T09:39:21.191Z] 3415.25 IOPS, 13.34 MiB/s [2024-11-15T09:39:22.126Z] 3420.20 IOPS, 13.36 MiB/s [2024-11-15T09:39:23.060Z] 3411.67 IOPS, 13.33 MiB/s [2024-11-15T09:39:23.995Z] 3399.57 IOPS, 13.28 MiB/s [2024-11-15T09:39:24.930Z] 3382.88 IOPS, 13.21 MiB/s [2024-11-15T09:39:26.304Z] 3395.11 IOPS, 13.26 MiB/s [2024-11-15T09:39:26.304Z] 3397.00 IOPS, 13.27 MiB/s 00:20:37.841 Latency(us) 00:20:37.841 [2024-11-15T09:39:26.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.841 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:37.841 Verification LBA range: start 0x0 length 0x2000 00:20:37.841 TLSTESTn1 : 10.02 3402.35 13.29 0.00 0.00 37558.97 8980.86 41166.32 00:20:37.841 [2024-11-15T09:39:26.304Z] =================================================================================================================== 00:20:37.841 [2024-11-15T09:39:26.304Z] Total : 3402.35 13.29 0.00 0.00 37558.97 8980.86 41166.32 00:20:37.841 { 00:20:37.841 "results": [ 00:20:37.841 { 00:20:37.841 "job": "TLSTESTn1", 00:20:37.841 "core_mask": "0x4", 00:20:37.841 "workload": "verify", 00:20:37.841 "status": "finished", 00:20:37.841 "verify_range": { 00:20:37.841 "start": 0, 00:20:37.841 "length": 8192 00:20:37.841 }, 00:20:37.841 "queue_depth": 128, 00:20:37.841 "io_size": 4096, 00:20:37.841 "runtime": 10.021902, 00:20:37.841 "iops": 3402.348177022685, 00:20:37.841 "mibps": 13.290422566494863, 00:20:37.841 "io_failed": 0, 00:20:37.841 "io_timeout": 0, 00:20:37.841 "avg_latency_us": 37558.966865961505, 00:20:37.841 "min_latency_us": 8980.85925925926, 00:20:37.841 "max_latency_us": 41166.317037037035 00:20:37.841 } 00:20:37.841 ], 00:20:37.841 "core_count": 1 00:20:37.841 } 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:37.842 nvmf_trace.0 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 405678 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 405678 ']' 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 405678 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:37.842 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 405678 00:20:37.842 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:37.842 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:37.842 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 405678' 00:20:37.842 killing process with pid 405678 00:20:37.842 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 405678 00:20:37.842 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.842 00:20:37.842 Latency(us) 00:20:37.842 [2024-11-15T09:39:26.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.842 [2024-11-15T09:39:26.305Z] =================================================================================================================== 00:20:37.842 [2024-11-15T09:39:26.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.842 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 405678 00:20:37.842 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:37.842 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:37.842 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:37.842 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.842 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:37.842 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.842 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.842 rmmod nvme_tcp 00:20:37.842 rmmod nvme_fabrics 00:20:37.842 rmmod nvme_keyring 00:20:37.842 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.100 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:38.100 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:38.100 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 405642 ']' 00:20:38.100 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 405642 00:20:38.100 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 405642 ']' 00:20:38.100 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 405642 00:20:38.100 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:38.100 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:38.100 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 405642 00:20:38.100 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:38.100 10:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:38.100 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 405642' 00:20:38.100 killing process with pid 405642 00:20:38.100 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 405642 00:20:38.100 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 405642 00:20:38.358 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:38.358 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:38.358 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:38.358 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:38.358 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:38.358 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:38.358 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:38.358 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.358 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:38.358 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.358 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.358 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.264 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:40.264 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.61f 00:20:40.264 00:20:40.264 real 0m17.090s 00:20:40.264 user 0m21.512s 00:20:40.264 sys 0m6.546s 00:20:40.264 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:40.264 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:40.264 ************************************ 00:20:40.264 END TEST nvmf_fips 00:20:40.264 ************************************ 00:20:40.264 10:39:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:40.264 10:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:40.264 10:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:40.264 10:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:40.264 ************************************ 00:20:40.264 START TEST nvmf_control_msg_list 00:20:40.264 ************************************ 00:20:40.264 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:40.264 * Looking for test storage... 
00:20:40.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:40.264 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:40.264 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:40.265 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:40.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.524 --rc genhtml_branch_coverage=1 00:20:40.524 --rc genhtml_function_coverage=1 00:20:40.524 --rc genhtml_legend=1 00:20:40.524 --rc geninfo_all_blocks=1 00:20:40.524 --rc geninfo_unexecuted_blocks=1 00:20:40.524 00:20:40.524 ' 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:40.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.524 --rc genhtml_branch_coverage=1 00:20:40.524 --rc genhtml_function_coverage=1 00:20:40.524 --rc genhtml_legend=1 00:20:40.524 --rc geninfo_all_blocks=1 00:20:40.524 --rc geninfo_unexecuted_blocks=1 00:20:40.524 00:20:40.524 ' 00:20:40.524 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:40.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.525 --rc genhtml_branch_coverage=1 00:20:40.525 --rc genhtml_function_coverage=1 00:20:40.525 --rc genhtml_legend=1 00:20:40.525 --rc geninfo_all_blocks=1 00:20:40.525 --rc geninfo_unexecuted_blocks=1 00:20:40.525 00:20:40.525 ' 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:40.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.525 --rc genhtml_branch_coverage=1 00:20:40.525 --rc genhtml_function_coverage=1 00:20:40.525 --rc genhtml_legend=1 00:20:40.525 --rc geninfo_all_blocks=1 00:20:40.525 --rc geninfo_unexecuted_blocks=1 00:20:40.525 00:20:40.525 ' 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:40.525 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:43.058 10:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:43.058 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.058 10:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:43.058 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.058 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:43.059 Found net devices under 0000:82:00.0: cvl_0_0 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:43.059 Found net devices under 0000:82:00.1: cvl_0_1 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.059 10:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:43.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:20:43.059 00:20:43.059 --- 10.0.0.2 ping statistics --- 00:20:43.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.059 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:20:43.059 00:20:43.059 --- 10.0.0.1 ping statistics --- 00:20:43.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.059 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=409056 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 409056 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 409056 ']' 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 
-- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:43.059 [2024-11-15 10:39:31.228619] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:20:43.059 [2024-11-15 10:39:31.228721] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.059 [2024-11-15 10:39:31.300622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.059 [2024-11-15 10:39:31.354867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.059 [2024-11-15 10:39:31.354941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.059 [2024-11-15 10:39:31.354954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.059 [2024-11-15 10:39:31.354965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.059 [2024-11-15 10:39:31.354974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.059 [2024-11-15 10:39:31.355598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.059 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:43.060 [2024-11-15 10:39:31.494143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.060 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.060 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:43.060 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.060 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:43.060 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.060 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:43.060 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.060 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:43.060 Malloc0 00:20:43.060 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.060 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:43.060 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.060 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:43.318 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.318 10:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:43.318 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.318 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:43.318 [2024-11-15 10:39:31.533950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.318 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.318 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=409083 00:20:43.318 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:43.318 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=409084 00:20:43.318 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:43.318 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=409085 00:20:43.318 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 409083 00:20:43.318 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:43.318 [2024-11-15 10:39:31.592489] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:43.318 [2024-11-15 10:39:31.602492] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:43.318 [2024-11-15 10:39:31.602750] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:44.691 Initializing NVMe Controllers 00:20:44.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:44.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:44.691 Initialization complete. Launching workers. 
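Before the per-core results below, the target-side configuration the control_msg_list test issues via rpc_cmd (control_msg_list.sh@19-23) and the three perf jobs it launches (@26-30) can be restated as the following consolidated sketch. It assumes rpc.py talks to the default /var/tmp/spdk.sock of the nvmf_tgt started inside the namespace; SPDK_ROOT and the other variable names exist only for the sketch.

# Consolidated sketch of the control_msg_list target setup traced above.
# SPDK_ROOT stands in for the spdk checkout; rpc.py uses the default
# /var/tmp/spdk.sock of the nvmf_tgt running inside cvl_0_0_ns_spdk.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_ROOT/scripts/rpc.py"
SUBNQN=nqn.2024-07.io.spdk:cnode0

# TCP transport with a small in-capsule data size and a single control
# message buffer, which is what the three concurrent perf jobs exercise
$RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
$RPC nvmf_create_subsystem "$SUBNQN" -a
$RPC bdev_malloc_create -b Malloc0 32 512
$RPC nvmf_subsystem_add_ns "$SUBNQN" Malloc0
$RPC nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420

# three single-queue-depth randread jobs pinned to different cores,
# as launched by control_msg_list.sh@26-30 above
for mask in 0x2 0x4 0x8; do
    "$SPDK_ROOT/build/bin/spdk_nvme_perf" -c "$mask" -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
done
wait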
00:20:44.691 ======================================================== 00:20:44.691 Latency(us) 00:20:44.691 Device Information : IOPS MiB/s Average min max 00:20:44.691 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5354.00 20.91 186.22 152.11 297.01 00:20:44.691 ======================================================== 00:20:44.691 Total : 5354.00 20.91 186.22 152.11 297.01 00:20:44.691 00:20:44.691 Initializing NVMe Controllers 00:20:44.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:44.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:44.691 Initialization complete. Launching workers. 00:20:44.691 ======================================================== 00:20:44.691 Latency(us) 00:20:44.691 Device Information : IOPS MiB/s Average min max 00:20:44.691 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40932.66 40766.16 41944.23 00:20:44.691 ======================================================== 00:20:44.691 Total : 25.00 0.10 40932.66 40766.16 41944.23 00:20:44.691 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 409084 00:20:44.691 Initializing NVMe Controllers 00:20:44.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:44.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:44.691 Initialization complete. Launching workers. 00:20:44.691 ======================================================== 00:20:44.691 Latency(us) 00:20:44.691 Device Information : IOPS MiB/s Average min max 00:20:44.691 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 27.00 0.11 37894.13 319.57 41882.73 00:20:44.691 ======================================================== 00:20:44.691 Total : 27.00 0.11 37894.13 319.57 41882.73 00:20:44.691 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 409085 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:44.691 rmmod nvme_tcp 00:20:44.691 rmmod nvme_fabrics 00:20:44.691 rmmod nvme_keyring 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 409056 ']' 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 409056 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 409056 ']' 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 409056 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 409056 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 409056' 00:20:44.691 killing process with pid 409056 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 409056 00:20:44.691 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 409056 00:20:44.948 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:44.948 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:44.948 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:44.948 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:44.948 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:44.948 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:44.948 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:44.948 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:44.948 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:44.948 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.948 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.948 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.859 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:46.859 00:20:46.859 real 0m6.583s 00:20:46.859 user 0m6.065s 00:20:46.859 sys 0m2.788s 00:20:46.859 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:46.859 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:46.859 ************************************ 00:20:46.859 END TEST nvmf_control_msg_list 00:20:46.859 ************************************ 00:20:46.859 
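For reference, the control_msg_list run traced above reduces to a small target-side setup plus three short initiator workloads: a TCP transport limited to a single control message buffer (--control-msg-num 1) with 768-byte in-capsule data, one subsystem backed by a 32 MiB malloc bdev, and three 1-second spdk_nvme_perf jobs pinned to different cores. The wide latency spread in the tables above (about 186 us on one core versus roughly 40 ms on the other two) is consistent with connect-time control traffic being serialized through that single buffer. A condensed sketch of the same sequence, assuming scripts/rpc.py stands in for the test's rpc_cmd wrapper and that $SPDK_DIR points at the workspace checkout (both assumptions, not part of the trace):

  # Hypothetical replay of the trace above; paths and the rpc.py substitution are assumptions.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"

  # Target side: one control message buffer forces control messages to queue.
  $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  $RPC bdev_malloc_create -b Malloc0 32 512
  $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: three concurrent 4 KiB randread runs on cores 1, 2 and 3 (masks 0x2/0x4/0x8).
  for mask in 0x2 0x4 0x8; do
      "$SPDK_DIR/build/bin/spdk_nvme_perf" -c $mask -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait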
10:39:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:46.859 10:39:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:46.859 10:39:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:46.859 10:39:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:46.859 ************************************ 00:20:46.859 START TEST nvmf_wait_for_buf 00:20:46.859 ************************************ 00:20:46.859 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:47.118 * Looking for test storage... 00:20:47.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.118 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:47.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.119 --rc genhtml_branch_coverage=1 00:20:47.119 --rc genhtml_function_coverage=1 00:20:47.119 --rc genhtml_legend=1 00:20:47.119 --rc geninfo_all_blocks=1 00:20:47.119 --rc geninfo_unexecuted_blocks=1 00:20:47.119 00:20:47.119 ' 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:47.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.119 --rc genhtml_branch_coverage=1 00:20:47.119 --rc genhtml_function_coverage=1 00:20:47.119 --rc genhtml_legend=1 00:20:47.119 --rc geninfo_all_blocks=1 00:20:47.119 --rc geninfo_unexecuted_blocks=1 00:20:47.119 00:20:47.119 ' 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:47.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.119 --rc genhtml_branch_coverage=1 00:20:47.119 --rc genhtml_function_coverage=1 00:20:47.119 --rc genhtml_legend=1 00:20:47.119 --rc geninfo_all_blocks=1 00:20:47.119 --rc geninfo_unexecuted_blocks=1 00:20:47.119 00:20:47.119 ' 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:47.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.119 --rc genhtml_branch_coverage=1 00:20:47.119 --rc genhtml_function_coverage=1 00:20:47.119 --rc genhtml_legend=1 00:20:47.119 --rc geninfo_all_blocks=1 00:20:47.119 --rc geninfo_unexecuted_blocks=1 00:20:47.119 00:20:47.119 ' 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:47.119 10:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:47.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:47.119 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.649 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.649 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:49.649 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:49.649 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:49.649 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:49.649 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:49.649 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:49.649 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:49.649 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.650 
10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:49.650 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:49.650 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:49.650 Found net devices under 0000:82:00.0: cvl_0_0 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:49.650 Found net devices under 0000:82:00.1: cvl_0_1 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.650 10:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:49.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:20:49.650 00:20:49.650 --- 10.0.0.2 ping statistics --- 00:20:49.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.650 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:20:49.650 00:20:49.650 --- 10.0.0.1 ping statistics --- 00:20:49.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.650 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.650 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=411157 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 411157 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 411157 ']' 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.651 [2024-11-15 10:39:37.706752] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
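The nvmf_tgt instance starting here runs inside the network namespace that nvmftestinit wired up a few lines earlier: the two e810 ports found on 0000:82:00.0/1 (cvl_0_0 and cvl_0_1) are connected back-to-back, with the target port moved into cvl_0_0_ns_spdk and reachability verified by the ping output above. A minimal sketch of that plumbing, with the checkout path assumed from the workspace (interface names and addresses are taken from the trace):

  # Hypothetical condensed replay of the nvmftestinit plumbing traced above.
  NS=cvl_0_0_ns_spdk
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path

  ip netns add $NS
  ip link set cvl_0_0 netns $NS                      # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic

  ping -c 1 10.0.0.2                                 # root namespace -> target port
  ip netns exec $NS ping -c 1 10.0.0.1               # target namespace -> initiator port

  # The target then runs inside the namespace, paused until RPC configuration arrives:
  ip netns exec $NS "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &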
00:20:49.651 [2024-11-15 10:39:37.706850] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.651 [2024-11-15 10:39:37.781271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.651 [2024-11-15 10:39:37.838738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.651 [2024-11-15 10:39:37.838790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.651 [2024-11-15 10:39:37.838818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.651 [2024-11-15 10:39:37.838829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.651 [2024-11-15 10:39:37.838839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.651 [2024-11-15 10:39:37.839478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:49.651 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.651 10:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.651 Malloc0 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.651 [2024-11-15 10:39:38.086121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.651 [2024-11-15 10:39:38.110317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.651 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:49.909 [2024-11-15 10:39:38.197508] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:51.283 Initializing NVMe Controllers 00:20:51.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:51.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:51.283 Initialization complete. Launching workers. 00:20:51.283 ======================================================== 00:20:51.283 Latency(us) 00:20:51.283 Device Information : IOPS MiB/s Average min max 00:20:51.283 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33562.57 7038.51 71839.59 00:20:51.283 ======================================================== 00:20:51.283 Total : 124.00 15.50 33562.57 7038.51 71839.59 00:20:51.283 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:51.283 rmmod nvme_tcp 00:20:51.283 rmmod nvme_fabrics 00:20:51.283 rmmod nvme_keyring 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 411157 ']' 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 411157 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 411157 ']' 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 411157 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:20:51.283 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:51.284 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 411157 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 411157' 00:20:51.543 killing process with pid 411157 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 411157 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 411157 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.543 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.078 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:54.078 00:20:54.078 real 0m6.714s 00:20:54.078 user 0m3.167s 00:20:54.078 sys 0m2.028s 00:20:54.078 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:54.078 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.078 ************************************ 00:20:54.078 END TEST nvmf_wait_for_buf 00:20:54.078 ************************************ 00:20:54.078 10:39:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:54.078 10:39:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:54.078 10:39:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:54.078 10:39:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:54.078 10:39:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:54.078 10:39:42 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:55.983 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:55.983 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:55.983 Found net devices under 0000:82:00.0: cvl_0_0 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.983 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # 
echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:55.984 Found net devices under 0000:82:00.1: cvl_0_1 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:55.984 ************************************ 00:20:55.984 START TEST nvmf_perf_adq 00:20:55.984 ************************************ 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:55.984 * Looking for test storage... 00:20:55.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:55.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.984 --rc genhtml_branch_coverage=1 00:20:55.984 --rc genhtml_function_coverage=1 00:20:55.984 --rc genhtml_legend=1 00:20:55.984 --rc geninfo_all_blocks=1 00:20:55.984 --rc geninfo_unexecuted_blocks=1 00:20:55.984 00:20:55.984 ' 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:55.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.984 --rc genhtml_branch_coverage=1 00:20:55.984 --rc genhtml_function_coverage=1 00:20:55.984 --rc genhtml_legend=1 00:20:55.984 --rc geninfo_all_blocks=1 00:20:55.984 --rc geninfo_unexecuted_blocks=1 00:20:55.984 00:20:55.984 ' 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:55.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.984 --rc genhtml_branch_coverage=1 00:20:55.984 --rc genhtml_function_coverage=1 00:20:55.984 --rc genhtml_legend=1 00:20:55.984 --rc geninfo_all_blocks=1 00:20:55.984 --rc geninfo_unexecuted_blocks=1 00:20:55.984 00:20:55.984 ' 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:55.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.984 --rc genhtml_branch_coverage=1 00:20:55.984 --rc genhtml_function_coverage=1 00:20:55.984 --rc genhtml_legend=1 00:20:55.984 --rc geninfo_all_blocks=1 00:20:55.984 --rc geninfo_unexecuted_blocks=1 00:20:55.984 00:20:55.984 ' 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:55.984 10:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:55.984 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:55.985 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.985 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.985 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.985 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:55.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:55.985 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:55.985 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:55.985 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:55.985 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:55.985 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:55.985 10:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:58.517 10:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:58.517 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:58.517 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:58.517 Found net devices under 0000:82:00.0: cvl_0_0 00:20:58.517 10:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:58.517 Found net devices under 0000:82:00.1: cvl_0_1 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:58.517 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:58.776 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:02.970 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:07.223 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:07.223 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:07.223 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.223 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:07.223 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:07.223 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:07.223 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.223 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.224 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:07.483 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:07.483 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.483 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:07.484 Found net devices under 0000:82:00.0: cvl_0_0 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:07.484 Found net devices under 0000:82:00.1: cvl_0_1 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:07.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:21:07.484 00:21:07.484 --- 10.0.0.2 ping statistics --- 00:21:07.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.484 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:07.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:21:07.484 00:21:07.484 --- 10.0.0.1 ping statistics --- 00:21:07.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.484 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=416136 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 416136 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 416136 ']' 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:07.484 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.484 [2024-11-15 10:39:55.911244] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
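Condensing the nvmftestinit trace above (nvmf/common.sh@250 onward): cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and becomes the target at 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, TCP port 4420 is opened from the initiator side, and both directions are ping-checked before nvmf_tgt is started inside the namespace. Written out by hand (same interface and address names as the log; the absolute build path is shortened to build/bin), the sequence is roughly:

    # Give the target port its own network namespace; the initiator port stays put.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic in from the initiator interface, then sanity-check both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # The target application then runs inside the namespace (perf_adq.sh@77 above).
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &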
00:21:07.484 [2024-11-15 10:39:55.911331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.743 [2024-11-15 10:39:55.984121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.743 [2024-11-15 10:39:56.042799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.743 [2024-11-15 10:39:56.042846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.743 [2024-11-15 10:39:56.042880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.743 [2024-11-15 10:39:56.042892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.743 [2024-11-15 10:39:56.042902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.743 [2024-11-15 10:39:56.044471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.743 [2024-11-15 10:39:56.044533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.743 [2024-11-15 10:39:56.044582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.743 [2024-11-15 10:39:56.044586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.743 
10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.743 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.001 [2024-11-15 10:39:56.313075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.001 Malloc1 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.001 [2024-11-15 10:39:56.373445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=416285 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:08.001 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:10.529 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:10.529 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.529 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.529 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.529 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:10.529 "tick_rate": 2700000000, 00:21:10.529 "poll_groups": [ 00:21:10.529 { 00:21:10.529 "name": "nvmf_tgt_poll_group_000", 00:21:10.529 "admin_qpairs": 1, 00:21:10.529 "io_qpairs": 1, 00:21:10.529 "current_admin_qpairs": 1, 00:21:10.529 "current_io_qpairs": 1, 00:21:10.529 "pending_bdev_io": 0, 00:21:10.529 "completed_nvme_io": 18994, 00:21:10.529 "transports": [ 00:21:10.529 { 00:21:10.529 "trtype": "TCP" 00:21:10.529 } 00:21:10.529 ] 00:21:10.529 }, 00:21:10.529 { 00:21:10.529 "name": "nvmf_tgt_poll_group_001", 00:21:10.529 "admin_qpairs": 0, 00:21:10.529 "io_qpairs": 1, 00:21:10.529 "current_admin_qpairs": 0, 00:21:10.529 "current_io_qpairs": 1, 00:21:10.529 "pending_bdev_io": 0, 00:21:10.529 "completed_nvme_io": 19235, 00:21:10.529 "transports": [ 00:21:10.529 { 00:21:10.529 "trtype": "TCP" 00:21:10.529 } 00:21:10.529 ] 00:21:10.529 }, 00:21:10.529 { 00:21:10.529 "name": "nvmf_tgt_poll_group_002", 00:21:10.529 "admin_qpairs": 0, 00:21:10.529 "io_qpairs": 1, 00:21:10.529 "current_admin_qpairs": 0, 00:21:10.529 "current_io_qpairs": 1, 00:21:10.529 "pending_bdev_io": 0, 00:21:10.529 "completed_nvme_io": 19503, 00:21:10.529 "transports": [ 00:21:10.529 { 00:21:10.529 "trtype": "TCP" 00:21:10.529 } 00:21:10.529 ] 00:21:10.529 }, 00:21:10.529 { 00:21:10.529 "name": "nvmf_tgt_poll_group_003", 00:21:10.529 "admin_qpairs": 0, 00:21:10.529 "io_qpairs": 1, 00:21:10.529 "current_admin_qpairs": 0, 00:21:10.529 "current_io_qpairs": 1, 00:21:10.529 "pending_bdev_io": 0, 00:21:10.529 "completed_nvme_io": 19126, 00:21:10.529 "transports": [ 00:21:10.529 { 00:21:10.529 "trtype": "TCP" 00:21:10.529 } 00:21:10.529 ] 00:21:10.529 } 00:21:10.529 ] 00:21:10.529 }' 00:21:10.529 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:10.529 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:10.529 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:10.529 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:10.529 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 416285 00:21:18.634 Initializing NVMe Controllers 00:21:18.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:18.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:18.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:18.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:18.634 
Initialization complete. Launching workers. 00:21:18.634 ======================================================== 00:21:18.634 Latency(us) 00:21:18.634 Device Information : IOPS MiB/s Average min max 00:21:18.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10025.10 39.16 6385.33 2542.97 10838.21 00:21:18.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10146.80 39.64 6309.04 2543.18 10439.70 00:21:18.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10267.00 40.11 6233.95 2736.73 10190.85 00:21:18.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10126.80 39.56 6321.43 2177.33 10769.51 00:21:18.634 ======================================================== 00:21:18.634 Total : 40565.68 158.46 6311.98 2177.33 10838.21 00:21:18.634 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:18.634 rmmod nvme_tcp 00:21:18.634 rmmod nvme_fabrics 00:21:18.634 rmmod nvme_keyring 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 416136 ']' 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 416136 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 416136 ']' 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 416136 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 416136 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 416136' 00:21:18.634 killing process with pid 416136 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 416136 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 416136 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # 
'[' '' == iso ']' 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.634 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.540 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:20.540 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:20.540 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:20.540 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:21.110 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:23.015 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
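Before the second discovery pass continues below, it is worth pulling together what the first ADQ pass (traced above from 10:39:56 onward) did once the target was up: placement-ID socket options were set on the posix implementation, a TCP transport was created with an 8 KiB I/O unit and socket priority 0, a small Malloc bdev (64 MB, 512-byte blocks) was exported as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, spdk_nvme_perf drove 64-deep 4 KiB random reads from cores 0xF0, and nvmf_get_stats confirmed mid-run that each of the four poll groups owned exactly one I/O qpair. Replayed by hand with scripts/rpc.py (rpc_cmd in the trace is the test wrapper around it; the flags below are copied from the log rather than re-verified against every SPDK revision):

    # Target-side configuration, issued against the nvmf_tgt started with --wait-for-rpc.
    scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator-side load runs in the background; the qpair spread is checked mid-run
    # (perf_adq.sh@79-91 above: sleep 2, nvmf_get_stats, then wait for perf).
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    perfpid=$!
    sleep 2
    scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l   # expect 4
    wait "$perfpid"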
00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:28.292 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:28.293 10:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:28.293 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:28.293 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:28.293 Found net devices under 0000:82:00.0: cvl_0_0 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.293 10:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:28.293 Found net devices under 0000:82:00.1: cvl_0_1 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:28.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:21:28.293 00:21:28.293 --- 10.0.0.2 ping statistics --- 00:21:28.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.293 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:28.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:21:28.293 00:21:28.293 --- 10.0.0.1 ping statistics --- 00:21:28.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.293 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:28.293 net.core.busy_poll = 1 00:21:28.293 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:21:28.294 net.core.busy_read = 1 00:21:28.294 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:28.294 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:28.294 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=418886 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 418886 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 418886 ']' 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:28.552 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.552 [2024-11-15 10:40:16.846877] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:21:28.552 [2024-11-15 10:40:16.846951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.552 [2024-11-15 10:40:16.920964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:28.552 [2024-11-15 10:40:16.981605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
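The adq_configure_driver entries just above are the NIC-side half of ADQ: hardware TC offload is enabled on the target-facing port, busy polling is switched on, and a flower filter steers NVMe/TCP traffic destined for 10.0.0.2:4420 into its own hardware traffic class before nvmf_tgt is started with --wait-for-rpc. A consolidated sketch of those commands, with the interface, namespace and address taken from this run (treat them as placeholders for other setups; the NS helper variable is my own shorthand, not part of the script):

    # ADQ NIC configuration sketch (iface/netns/IP copied from the trace above)
    NS="ip netns exec cvl_0_0_ns_spdk"       # unquoted expansion is used as a command prefix
    $NS ethtool --offload cvl_0_0 hw-tc-offload on
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The matching target-side half (sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1) follows in the RPC calls traced below.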
00:21:28.552 [2024-11-15 10:40:16.981669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.552 [2024-11-15 10:40:16.981698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.552 [2024-11-15 10:40:16.981709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.552 [2024-11-15 10:40:16.981719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.553 [2024-11-15 10:40:16.983322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.553 [2024-11-15 10:40:16.983409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.553 [2024-11-15 10:40:16.983404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.553 [2024-11-15 10:40:16.983346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.810 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:28.810 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:28.810 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.811 10:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.811 [2024-11-15 10:40:17.268465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:28.811 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.068 Malloc1 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.068 [2024-11-15 10:40:17.333568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=418936 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:29.068 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:30.967 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:30.967 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.967 10:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.967 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.967 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:30.967 "tick_rate": 2700000000, 00:21:30.967 "poll_groups": [ 00:21:30.967 { 00:21:30.967 "name": "nvmf_tgt_poll_group_000", 00:21:30.967 "admin_qpairs": 1, 00:21:30.967 "io_qpairs": 1, 00:21:30.967 "current_admin_qpairs": 1, 00:21:30.967 "current_io_qpairs": 1, 00:21:30.967 "pending_bdev_io": 0, 00:21:30.967 "completed_nvme_io": 25763, 00:21:30.967 "transports": [ 00:21:30.967 { 00:21:30.967 "trtype": "TCP" 00:21:30.967 } 00:21:30.967 ] 00:21:30.967 }, 00:21:30.967 { 00:21:30.967 "name": "nvmf_tgt_poll_group_001", 00:21:30.967 "admin_qpairs": 0, 00:21:30.967 "io_qpairs": 3, 00:21:30.967 "current_admin_qpairs": 0, 00:21:30.967 "current_io_qpairs": 3, 00:21:30.967 "pending_bdev_io": 0, 00:21:30.967 "completed_nvme_io": 26781, 00:21:30.967 "transports": [ 00:21:30.967 { 00:21:30.967 "trtype": "TCP" 00:21:30.967 } 00:21:30.967 ] 00:21:30.967 }, 00:21:30.967 { 00:21:30.967 "name": "nvmf_tgt_poll_group_002", 00:21:30.967 "admin_qpairs": 0, 00:21:30.967 "io_qpairs": 0, 00:21:30.967 "current_admin_qpairs": 0, 00:21:30.967 "current_io_qpairs": 0, 00:21:30.967 "pending_bdev_io": 0, 00:21:30.967 "completed_nvme_io": 0, 00:21:30.967 "transports": [ 00:21:30.967 { 00:21:30.967 "trtype": "TCP" 00:21:30.967 } 00:21:30.967 ] 00:21:30.967 }, 00:21:30.967 { 00:21:30.967 "name": "nvmf_tgt_poll_group_003", 00:21:30.967 "admin_qpairs": 0, 00:21:30.967 "io_qpairs": 0, 00:21:30.967 "current_admin_qpairs": 0, 00:21:30.967 "current_io_qpairs": 0, 00:21:30.967 "pending_bdev_io": 0, 00:21:30.967 "completed_nvme_io": 0, 00:21:30.967 "transports": [ 00:21:30.967 { 00:21:30.967 "trtype": "TCP" 00:21:30.967 } 00:21:30.967 ] 00:21:30.967 } 00:21:30.967 ] 00:21:30.967 }' 00:21:30.967 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:30.967 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:30.967 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:30.967 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:30.967 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 418936 00:21:39.070 Initializing NVMe Controllers 00:21:39.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:39.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:39.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:39.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:39.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:39.071 Initialization complete. Launching workers. 
00:21:39.071 ======================================================== 00:21:39.071 Latency(us) 00:21:39.071 Device Information : IOPS MiB/s Average min max 00:21:39.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4168.90 16.28 15362.46 2174.20 64475.83 00:21:39.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4574.20 17.87 14000.03 2993.68 61983.98 00:21:39.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4995.80 19.51 12819.35 1950.85 62509.91 00:21:39.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13555.20 52.95 4721.37 1795.84 6855.05 00:21:39.071 ======================================================== 00:21:39.071 Total : 27294.10 106.62 9383.92 1795.84 64475.83 00:21:39.071 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:39.071 rmmod nvme_tcp 00:21:39.071 rmmod nvme_fabrics 00:21:39.071 rmmod nvme_keyring 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 418886 ']' 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 418886 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 418886 ']' 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 418886 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:39.071 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 418886 00:21:39.329 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:39.329 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:39.329 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 418886' 00:21:39.329 killing process with pid 418886 00:21:39.329 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 418886 00:21:39.329 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 418886 00:21:39.329 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:39.329 10:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:39.329 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:39.329 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:39.329 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:39.329 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:39.329 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:39.589 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:39.589 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:39.589 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.589 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.589 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.495 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:41.495 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:41.495 00:21:41.495 real 0m45.679s 00:21:41.495 user 2m40.732s 00:21:41.495 sys 0m10.936s 00:21:41.495 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:41.495 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.495 ************************************ 00:21:41.495 END TEST nvmf_perf_adq 00:21:41.495 ************************************ 00:21:41.495 10:40:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:41.495 10:40:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:41.495 10:40:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:41.495 10:40:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:41.495 ************************************ 00:21:41.495 START TEST nvmf_shutdown 00:21:41.495 ************************************ 00:21:41.495 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:41.495 * Looking for test storage... 
00:21:41.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:41.495 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:41.495 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:21:41.495 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:41.754 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:41.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.755 --rc genhtml_branch_coverage=1 00:21:41.755 --rc genhtml_function_coverage=1 00:21:41.755 --rc genhtml_legend=1 00:21:41.755 --rc geninfo_all_blocks=1 00:21:41.755 --rc geninfo_unexecuted_blocks=1 00:21:41.755 00:21:41.755 ' 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:41.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.755 --rc genhtml_branch_coverage=1 00:21:41.755 --rc genhtml_function_coverage=1 00:21:41.755 --rc genhtml_legend=1 00:21:41.755 --rc geninfo_all_blocks=1 00:21:41.755 --rc geninfo_unexecuted_blocks=1 00:21:41.755 00:21:41.755 ' 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:41.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.755 --rc genhtml_branch_coverage=1 00:21:41.755 --rc genhtml_function_coverage=1 00:21:41.755 --rc genhtml_legend=1 00:21:41.755 --rc geninfo_all_blocks=1 00:21:41.755 --rc geninfo_unexecuted_blocks=1 00:21:41.755 00:21:41.755 ' 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:41.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.755 --rc genhtml_branch_coverage=1 00:21:41.755 --rc genhtml_function_coverage=1 00:21:41.755 --rc genhtml_legend=1 00:21:41.755 --rc geninfo_all_blocks=1 00:21:41.755 --rc geninfo_unexecuted_blocks=1 00:21:41.755 00:21:41.755 ' 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
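The lt/cmp_versions trace above is how the harness decides whether the installed lcov is older than 2.x before exporting its coverage options: each dotted version string is split on '.', '-' and ':' and the fields are compared numerically from left to right. A minimal sketch of that comparison pattern, under the assumption that this captures the intent; version_lt is a hypothetical name, and the real scripts/common.sh code additionally validates that each field is numeric:

    # version_lt A B  ->  exit 0 if A < B, comparing numeric fields left to right
    version_lt() {
        local IFS=.-:                     # split versions on '.', '-' and ':'
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}     # missing fields count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                          # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2"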
00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:41.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:41.755 10:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:41.755 ************************************ 00:21:41.755 START TEST nvmf_shutdown_tc1 00:21:41.755 ************************************ 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.755 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.756 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.756 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:41.756 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:41.756 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:41.756 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.288 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.288 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:44.288 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:44.289 10:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:44.289 10:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:44.289 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:44.289 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:44.289 Found net devices under 0000:82:00.0: cvl_0_0 00:21:44.289 10:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:44.289 Found net devices under 0000:82:00.1: cvl_0_1 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:44.289 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:44.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:21:44.289 00:21:44.290 --- 10.0.0.2 ping statistics --- 00:21:44.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.290 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:44.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:21:44.290 00:21:44.290 --- 10.0.0.1 ping statistics --- 00:21:44.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.290 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=422099 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 422099 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 422099 ']' 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
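The trace above (nvmf_tcp_init) prepares the TCP test topology before the target starts: both E810 ports are detected as cvl_0_0/cvl_0_1, cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk), 10.0.0.1/24 is put on the initiator-side interface and 10.0.0.2/24 on the interface inside the namespace, an iptables ACCEPT rule for TCP port 4420 is inserted (tagged with an SPDK_NVMF comment so cleanup can strip it later), connectivity is ping-checked in both directions, and nvmf_tgt is then launched inside the namespace. A minimal standalone sketch of those steps, assuming the interface names, addresses and binary path seen in this run (the real helpers live in test/nvmf/common.sh as nvmf_tcp_init and nvmfappstart):

#!/usr/bin/env bash
# Sketch of the topology built by nvmf_tcp_init in the trace above.
# Interface names, addresses and the nvmf_tgt path are taken from this run
# and are assumptions for any other machine.
set -euo pipefail

TARGET_IF=cvl_0_0            # port that moves into the target namespace
INITIATOR_IF=cvl_0_1         # port that stays in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic in; the comment lets cleanup remove exactly this rule.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
	-m comment --comment SPDK_NVMF:test-rule

# Verify both directions before starting the target (the two ports are
# cabled back-to-back on this rig).
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

modprobe nvme-tcp            # host-side NVMe/TCP module, as loaded by the test

# Start the target inside the namespace (path abbreviated from the job's).
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
NVMF_PID=$!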
00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.290 [2024-11-15 10:40:32.435279] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:21:44.290 [2024-11-15 10:40:32.435374] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.290 [2024-11-15 10:40:32.507299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.290 [2024-11-15 10:40:32.566896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.290 [2024-11-15 10:40:32.566962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.290 [2024-11-15 10:40:32.566991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.290 [2024-11-15 10:40:32.567002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.290 [2024-11-15 10:40:32.567012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.290 [2024-11-15 10:40:32.568621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.290 [2024-11-15 10:40:32.568690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:44.290 [2024-11-15 10:40:32.568786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:44.290 [2024-11-15 10:40:32.568788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.290 [2024-11-15 10:40:32.717463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:44.290 10:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.290 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:44.548 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.548 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:44.548 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:44.548 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.548 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.548 Malloc1 
00:21:44.548 [2024-11-15 10:40:32.815989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.548 Malloc2 00:21:44.548 Malloc3 00:21:44.548 Malloc4 00:21:44.548 Malloc5 00:21:44.807 Malloc6 00:21:44.807 Malloc7 00:21:44.807 Malloc8 00:21:44.807 Malloc9 00:21:44.807 Malloc10 00:21:44.807 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.807 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:44.807 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:44.807 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=422278 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 422278 /var/tmp/bdevperf.sock 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 422278 ']' 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.066 { 00:21:45.066 "params": { 00:21:45.066 "name": "Nvme$subsystem", 00:21:45.066 "trtype": "$TEST_TRANSPORT", 00:21:45.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.066 "adrfam": "ipv4", 00:21:45.066 "trsvcid": "$NVMF_PORT", 00:21:45.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.066 "hdgst": ${hdgst:-false}, 00:21:45.066 "ddgst": ${ddgst:-false} 00:21:45.066 }, 00:21:45.066 "method": "bdev_nvme_attach_controller" 00:21:45.066 } 00:21:45.066 EOF 00:21:45.066 )") 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.066 { 00:21:45.066 "params": { 00:21:45.066 "name": "Nvme$subsystem", 00:21:45.066 "trtype": "$TEST_TRANSPORT", 00:21:45.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.066 "adrfam": "ipv4", 00:21:45.066 "trsvcid": "$NVMF_PORT", 00:21:45.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.066 "hdgst": ${hdgst:-false}, 00:21:45.066 "ddgst": ${ddgst:-false} 00:21:45.066 }, 00:21:45.066 "method": "bdev_nvme_attach_controller" 00:21:45.066 } 00:21:45.066 EOF 00:21:45.066 )") 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.066 { 00:21:45.066 "params": { 00:21:45.066 "name": "Nvme$subsystem", 00:21:45.066 "trtype": "$TEST_TRANSPORT", 00:21:45.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.066 "adrfam": "ipv4", 00:21:45.066 "trsvcid": "$NVMF_PORT", 00:21:45.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.066 "hdgst": ${hdgst:-false}, 00:21:45.066 "ddgst": ${ddgst:-false} 00:21:45.066 }, 00:21:45.066 "method": "bdev_nvme_attach_controller" 00:21:45.066 } 00:21:45.066 EOF 00:21:45.066 )") 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.066 { 00:21:45.066 "params": { 00:21:45.066 "name": "Nvme$subsystem", 00:21:45.066 
"trtype": "$TEST_TRANSPORT", 00:21:45.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.066 "adrfam": "ipv4", 00:21:45.066 "trsvcid": "$NVMF_PORT", 00:21:45.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.066 "hdgst": ${hdgst:-false}, 00:21:45.066 "ddgst": ${ddgst:-false} 00:21:45.066 }, 00:21:45.066 "method": "bdev_nvme_attach_controller" 00:21:45.066 } 00:21:45.066 EOF 00:21:45.066 )") 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.066 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.066 { 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme$subsystem", 00:21:45.067 "trtype": "$TEST_TRANSPORT", 00:21:45.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "$NVMF_PORT", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.067 "hdgst": ${hdgst:-false}, 00:21:45.067 "ddgst": ${ddgst:-false} 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 } 00:21:45.067 EOF 00:21:45.067 )") 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.067 { 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme$subsystem", 00:21:45.067 "trtype": "$TEST_TRANSPORT", 00:21:45.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "$NVMF_PORT", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.067 "hdgst": ${hdgst:-false}, 00:21:45.067 "ddgst": ${ddgst:-false} 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 } 00:21:45.067 EOF 00:21:45.067 )") 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.067 { 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme$subsystem", 00:21:45.067 "trtype": "$TEST_TRANSPORT", 00:21:45.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "$NVMF_PORT", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.067 "hdgst": ${hdgst:-false}, 00:21:45.067 "ddgst": ${ddgst:-false} 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 } 00:21:45.067 EOF 00:21:45.067 )") 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.067 10:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.067 { 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme$subsystem", 00:21:45.067 "trtype": "$TEST_TRANSPORT", 00:21:45.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "$NVMF_PORT", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.067 "hdgst": ${hdgst:-false}, 00:21:45.067 "ddgst": ${ddgst:-false} 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 } 00:21:45.067 EOF 00:21:45.067 )") 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.067 { 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme$subsystem", 00:21:45.067 "trtype": "$TEST_TRANSPORT", 00:21:45.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "$NVMF_PORT", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.067 "hdgst": ${hdgst:-false}, 00:21:45.067 "ddgst": ${ddgst:-false} 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 } 00:21:45.067 EOF 00:21:45.067 )") 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.067 { 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme$subsystem", 00:21:45.067 "trtype": "$TEST_TRANSPORT", 00:21:45.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "$NVMF_PORT", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.067 "hdgst": ${hdgst:-false}, 00:21:45.067 "ddgst": ${ddgst:-false} 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 } 00:21:45.067 EOF 00:21:45.067 )") 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
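The config+=(...) heredoc fragments traced above come from gen_nvmf_target_json: one bdev_nvme_attach_controller stanza is appended per requested subsystem (1..10 here), the pieces are comma-joined with IFS=',' and passed through jq, and the resolved result is what printf emits just below (and what bdev_svc receives over /dev/fd/63). A simplified re-creation of that pattern, assuming the tcp/10.0.0.2/4420 values from this run; the real helper additionally embeds the joined list in a full SPDK "subsystems"/"bdev" configuration document, which is omitted here:

# Simplified sketch of the gen_nvmf_target_json pattern seen in the trace;
# TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT must be set by the caller.
gen_attach_stanzas() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  # jq validates the joined stanzas and pretty-prints one object per connection.
  jq . <<<"[${config[*]}]"
}

# Usage with the values from this run:
#   TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420 \
#     gen_attach_stanzas {1..10}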
00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:45.067 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme1", 00:21:45.067 "trtype": "tcp", 00:21:45.067 "traddr": "10.0.0.2", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "4420", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.067 "hdgst": false, 00:21:45.067 "ddgst": false 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 },{ 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme2", 00:21:45.067 "trtype": "tcp", 00:21:45.067 "traddr": "10.0.0.2", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "4420", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:45.067 "hdgst": false, 00:21:45.067 "ddgst": false 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 },{ 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme3", 00:21:45.067 "trtype": "tcp", 00:21:45.067 "traddr": "10.0.0.2", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "4420", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:45.067 "hdgst": false, 00:21:45.067 "ddgst": false 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 },{ 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme4", 00:21:45.067 "trtype": "tcp", 00:21:45.067 "traddr": "10.0.0.2", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "4420", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:45.067 "hdgst": false, 00:21:45.067 "ddgst": false 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 },{ 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme5", 00:21:45.067 "trtype": "tcp", 00:21:45.067 "traddr": "10.0.0.2", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "4420", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:45.067 "hdgst": false, 00:21:45.067 "ddgst": false 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 },{ 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme6", 00:21:45.067 "trtype": "tcp", 00:21:45.067 "traddr": "10.0.0.2", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "4420", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:45.067 "hdgst": false, 00:21:45.067 "ddgst": false 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 },{ 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme7", 00:21:45.067 "trtype": "tcp", 00:21:45.067 "traddr": "10.0.0.2", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "4420", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:45.067 "hdgst": false, 00:21:45.067 "ddgst": false 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 },{ 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme8", 00:21:45.067 "trtype": "tcp", 00:21:45.067 "traddr": "10.0.0.2", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "4420", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:45.067 "hdgst": false, 00:21:45.067 "ddgst": false 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 },{ 00:21:45.067 "params": { 00:21:45.067 "name": "Nvme9", 00:21:45.067 "trtype": "tcp", 00:21:45.067 "traddr": "10.0.0.2", 00:21:45.067 "adrfam": "ipv4", 00:21:45.067 "trsvcid": "4420", 00:21:45.067 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:45.067 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:45.067 "hdgst": false, 00:21:45.067 "ddgst": false 00:21:45.067 }, 00:21:45.067 "method": "bdev_nvme_attach_controller" 00:21:45.067 },{ 00:21:45.067 "params": { 00:21:45.068 "name": "Nvme10", 00:21:45.068 "trtype": "tcp", 00:21:45.068 "traddr": "10.0.0.2", 00:21:45.068 "adrfam": "ipv4", 00:21:45.068 "trsvcid": "4420", 00:21:45.068 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:45.068 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:45.068 "hdgst": false, 00:21:45.068 "ddgst": false 00:21:45.068 }, 00:21:45.068 "method": "bdev_nvme_attach_controller" 00:21:45.068 }' 00:21:45.068 [2024-11-15 10:40:33.337931] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:21:45.068 [2024-11-15 10:40:33.338010] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:45.068 [2024-11-15 10:40:33.410328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.068 [2024-11-15 10:40:33.469301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.964 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:46.964 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:21:46.964 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:46.964 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.964 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.964 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.964 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 422278 00:21:46.964 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:46.964 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:48.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 422278 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 422099 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 
3 4 5 6 7 8 9 10 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:48.338 { 00:21:48.338 "params": { 00:21:48.338 "name": "Nvme$subsystem", 00:21:48.338 "trtype": "$TEST_TRANSPORT", 00:21:48.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.338 "adrfam": "ipv4", 00:21:48.338 "trsvcid": "$NVMF_PORT", 00:21:48.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.338 "hdgst": ${hdgst:-false}, 00:21:48.338 "ddgst": ${ddgst:-false} 00:21:48.338 }, 00:21:48.338 "method": "bdev_nvme_attach_controller" 00:21:48.338 } 00:21:48.338 EOF 00:21:48.338 )") 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:48.338 { 00:21:48.338 "params": { 00:21:48.338 "name": "Nvme$subsystem", 00:21:48.338 "trtype": "$TEST_TRANSPORT", 00:21:48.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.338 "adrfam": "ipv4", 00:21:48.338 "trsvcid": "$NVMF_PORT", 00:21:48.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.338 "hdgst": ${hdgst:-false}, 00:21:48.338 "ddgst": ${ddgst:-false} 00:21:48.338 }, 00:21:48.338 "method": "bdev_nvme_attach_controller" 00:21:48.338 } 00:21:48.338 EOF 00:21:48.338 )") 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:48.338 { 00:21:48.338 "params": { 00:21:48.338 "name": "Nvme$subsystem", 00:21:48.338 "trtype": "$TEST_TRANSPORT", 00:21:48.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.338 "adrfam": "ipv4", 00:21:48.338 "trsvcid": "$NVMF_PORT", 00:21:48.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.338 "hdgst": ${hdgst:-false}, 00:21:48.338 "ddgst": ${ddgst:-false} 00:21:48.338 }, 00:21:48.338 "method": "bdev_nvme_attach_controller" 00:21:48.338 } 00:21:48.338 EOF 00:21:48.338 )") 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:48.338 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:48.339 { 00:21:48.339 "params": { 00:21:48.339 "name": "Nvme$subsystem", 00:21:48.339 "trtype": "$TEST_TRANSPORT", 00:21:48.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 
"trsvcid": "$NVMF_PORT", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.339 "hdgst": ${hdgst:-false}, 00:21:48.339 "ddgst": ${ddgst:-false} 00:21:48.339 }, 00:21:48.339 "method": "bdev_nvme_attach_controller" 00:21:48.339 } 00:21:48.339 EOF 00:21:48.339 )") 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:48.339 { 00:21:48.339 "params": { 00:21:48.339 "name": "Nvme$subsystem", 00:21:48.339 "trtype": "$TEST_TRANSPORT", 00:21:48.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 "trsvcid": "$NVMF_PORT", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.339 "hdgst": ${hdgst:-false}, 00:21:48.339 "ddgst": ${ddgst:-false} 00:21:48.339 }, 00:21:48.339 "method": "bdev_nvme_attach_controller" 00:21:48.339 } 00:21:48.339 EOF 00:21:48.339 )") 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:48.339 { 00:21:48.339 "params": { 00:21:48.339 "name": "Nvme$subsystem", 00:21:48.339 "trtype": "$TEST_TRANSPORT", 00:21:48.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 "trsvcid": "$NVMF_PORT", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.339 "hdgst": ${hdgst:-false}, 00:21:48.339 "ddgst": ${ddgst:-false} 00:21:48.339 }, 00:21:48.339 "method": "bdev_nvme_attach_controller" 00:21:48.339 } 00:21:48.339 EOF 00:21:48.339 )") 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:48.339 { 00:21:48.339 "params": { 00:21:48.339 "name": "Nvme$subsystem", 00:21:48.339 "trtype": "$TEST_TRANSPORT", 00:21:48.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 "trsvcid": "$NVMF_PORT", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.339 "hdgst": ${hdgst:-false}, 00:21:48.339 "ddgst": ${ddgst:-false} 00:21:48.339 }, 00:21:48.339 "method": "bdev_nvme_attach_controller" 00:21:48.339 } 00:21:48.339 EOF 00:21:48.339 )") 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:48.339 { 00:21:48.339 
"params": { 00:21:48.339 "name": "Nvme$subsystem", 00:21:48.339 "trtype": "$TEST_TRANSPORT", 00:21:48.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 "trsvcid": "$NVMF_PORT", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.339 "hdgst": ${hdgst:-false}, 00:21:48.339 "ddgst": ${ddgst:-false} 00:21:48.339 }, 00:21:48.339 "method": "bdev_nvme_attach_controller" 00:21:48.339 } 00:21:48.339 EOF 00:21:48.339 )") 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:48.339 { 00:21:48.339 "params": { 00:21:48.339 "name": "Nvme$subsystem", 00:21:48.339 "trtype": "$TEST_TRANSPORT", 00:21:48.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 "trsvcid": "$NVMF_PORT", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.339 "hdgst": ${hdgst:-false}, 00:21:48.339 "ddgst": ${ddgst:-false} 00:21:48.339 }, 00:21:48.339 "method": "bdev_nvme_attach_controller" 00:21:48.339 } 00:21:48.339 EOF 00:21:48.339 )") 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:48.339 { 00:21:48.339 "params": { 00:21:48.339 "name": "Nvme$subsystem", 00:21:48.339 "trtype": "$TEST_TRANSPORT", 00:21:48.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 "trsvcid": "$NVMF_PORT", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.339 "hdgst": ${hdgst:-false}, 00:21:48.339 "ddgst": ${ddgst:-false} 00:21:48.339 }, 00:21:48.339 "method": "bdev_nvme_attach_controller" 00:21:48.339 } 00:21:48.339 EOF 00:21:48.339 )") 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:48.339 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:48.339 "params": { 00:21:48.339 "name": "Nvme1", 00:21:48.339 "trtype": "tcp", 00:21:48.339 "traddr": "10.0.0.2", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 "trsvcid": "4420", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.339 "hdgst": false, 00:21:48.339 "ddgst": false 00:21:48.339 }, 00:21:48.339 "method": "bdev_nvme_attach_controller" 00:21:48.339 },{ 00:21:48.339 "params": { 00:21:48.339 "name": "Nvme2", 00:21:48.339 "trtype": "tcp", 00:21:48.339 "traddr": "10.0.0.2", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 "trsvcid": "4420", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:48.339 "hdgst": false, 00:21:48.339 "ddgst": false 00:21:48.339 }, 00:21:48.339 "method": "bdev_nvme_attach_controller" 00:21:48.339 },{ 00:21:48.339 "params": { 00:21:48.339 "name": "Nvme3", 00:21:48.339 "trtype": "tcp", 00:21:48.339 "traddr": "10.0.0.2", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 "trsvcid": "4420", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:48.339 "hdgst": false, 00:21:48.339 "ddgst": false 00:21:48.339 }, 00:21:48.339 "method": "bdev_nvme_attach_controller" 00:21:48.339 },{ 00:21:48.339 "params": { 00:21:48.339 "name": "Nvme4", 00:21:48.339 "trtype": "tcp", 00:21:48.339 "traddr": "10.0.0.2", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 "trsvcid": "4420", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:48.339 "hdgst": false, 00:21:48.339 "ddgst": false 00:21:48.339 }, 00:21:48.339 "method": "bdev_nvme_attach_controller" 00:21:48.339 },{ 00:21:48.339 "params": { 00:21:48.339 "name": "Nvme5", 00:21:48.339 "trtype": "tcp", 00:21:48.339 "traddr": "10.0.0.2", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 "trsvcid": "4420", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:48.339 "hdgst": false, 00:21:48.339 "ddgst": false 00:21:48.339 }, 00:21:48.339 "method": "bdev_nvme_attach_controller" 00:21:48.339 },{ 00:21:48.339 "params": { 00:21:48.339 "name": "Nvme6", 00:21:48.339 "trtype": "tcp", 00:21:48.339 "traddr": "10.0.0.2", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 "trsvcid": "4420", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:48.339 "hdgst": false, 00:21:48.339 "ddgst": false 00:21:48.339 }, 00:21:48.339 "method": "bdev_nvme_attach_controller" 00:21:48.339 },{ 00:21:48.339 "params": { 00:21:48.339 "name": "Nvme7", 00:21:48.339 "trtype": "tcp", 00:21:48.339 "traddr": "10.0.0.2", 00:21:48.339 "adrfam": "ipv4", 00:21:48.339 "trsvcid": "4420", 00:21:48.339 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:48.339 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:48.340 "hdgst": false, 00:21:48.340 "ddgst": false 00:21:48.340 }, 00:21:48.340 "method": "bdev_nvme_attach_controller" 00:21:48.340 },{ 00:21:48.340 "params": { 00:21:48.340 "name": "Nvme8", 00:21:48.340 "trtype": "tcp", 00:21:48.340 "traddr": "10.0.0.2", 00:21:48.340 "adrfam": "ipv4", 00:21:48.340 "trsvcid": "4420", 00:21:48.340 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:48.340 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:48.340 "hdgst": false, 00:21:48.340 "ddgst": false 00:21:48.340 }, 00:21:48.340 "method": "bdev_nvme_attach_controller" 00:21:48.340 },{ 00:21:48.340 "params": { 00:21:48.340 "name": "Nvme9", 00:21:48.340 "trtype": "tcp", 00:21:48.340 "traddr": "10.0.0.2", 00:21:48.340 "adrfam": "ipv4", 00:21:48.340 "trsvcid": "4420", 00:21:48.340 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:48.340 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:48.340 "hdgst": false, 00:21:48.340 "ddgst": false 00:21:48.340 }, 00:21:48.340 "method": "bdev_nvme_attach_controller" 00:21:48.340 },{ 00:21:48.340 "params": { 00:21:48.340 "name": "Nvme10", 00:21:48.340 "trtype": "tcp", 00:21:48.340 "traddr": "10.0.0.2", 00:21:48.340 "adrfam": "ipv4", 00:21:48.340 "trsvcid": "4420", 00:21:48.340 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:48.340 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:48.340 "hdgst": false, 00:21:48.340 "ddgst": false 00:21:48.340 }, 00:21:48.340 "method": "bdev_nvme_attach_controller" 00:21:48.340 }' 00:21:48.340 [2024-11-15 10:40:36.453862] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:21:48.340 [2024-11-15 10:40:36.453947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422696 ] 00:21:48.340 [2024-11-15 10:40:36.526699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.340 [2024-11-15 10:40:36.588322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.711 Running I/O for 1 seconds... 00:21:50.902 1672.00 IOPS, 104.50 MiB/s 00:21:50.902 Latency(us) 00:21:50.902 [2024-11-15T09:40:39.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.902 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.902 Verification LBA range: start 0x0 length 0x400 00:21:50.902 Nvme1n1 : 1.11 172.29 10.77 0.00 0.00 367667.58 20000.62 307582.29 00:21:50.902 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.902 Verification LBA range: start 0x0 length 0x400 00:21:50.902 Nvme2n1 : 1.14 224.23 14.01 0.00 0.00 277471.76 33787.45 240784.12 00:21:50.902 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.902 Verification LBA range: start 0x0 length 0x400 00:21:50.902 Nvme3n1 : 1.13 227.22 14.20 0.00 0.00 269075.15 21262.79 268746.15 00:21:50.902 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.902 Verification LBA range: start 0x0 length 0x400 00:21:50.902 Nvme4n1 : 1.13 226.65 14.17 0.00 0.00 264881.87 33593.27 253211.69 00:21:50.902 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.902 Verification LBA range: start 0x0 length 0x400 00:21:50.902 Nvme5n1 : 1.14 227.47 14.22 0.00 0.00 257957.07 5558.42 267192.70 00:21:50.902 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.902 Verification LBA range: start 0x0 length 0x400 00:21:50.902 Nvme6n1 : 1.17 223.02 13.94 0.00 0.00 259853.13 3592.34 284280.60 00:21:50.902 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.902 Verification LBA range: start 0x0 length 0x400 00:21:50.902 Nvme7n1 : 1.15 226.19 14.14 0.00 0.00 251151.46 2402.99 274959.93 00:21:50.902 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.902 Verification 
LBA range: start 0x0 length 0x400 00:21:50.902 Nvme8n1 : 1.16 221.58 13.85 0.00 0.00 252589.13 18641.35 271853.04 00:21:50.902 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.902 Verification LBA range: start 0x0 length 0x400 00:21:50.902 Nvme9n1 : 1.16 223.22 13.95 0.00 0.00 245752.11 2936.98 287387.50 00:21:50.902 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.902 Verification LBA range: start 0x0 length 0x400 00:21:50.902 Nvme10n1 : 1.21 264.25 16.52 0.00 0.00 205798.36 7039.05 282727.16 00:21:50.902 [2024-11-15T09:40:39.365Z] =================================================================================================================== 00:21:50.902 [2024-11-15T09:40:39.365Z] Total : 2236.12 139.76 0.00 0.00 261126.48 2402.99 307582.29 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.160 rmmod nvme_tcp 00:21:51.160 rmmod nvme_fabrics 00:21:51.160 rmmod nvme_keyring 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 422099 ']' 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 422099 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 422099 ']' 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 422099 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:51.160 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 422099 00:21:51.161 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:51.161 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:51.161 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 422099' 00:21:51.161 killing process with pid 422099 00:21:51.161 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 422099 00:21:51.161 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 422099 00:21:51.728 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:51.728 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:51.728 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:51.728 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:51.728 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:51.728 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:51.728 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.728 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.728 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.728 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.728 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.728 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.266 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:54.266 00:21:54.266 real 0m12.072s 00:21:54.266 user 0m35.206s 00:21:54.266 sys 0m3.326s 00:21:54.266 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:54.266 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:54.266 ************************************ 00:21:54.266 END TEST nvmf_shutdown_tc1 00:21:54.266 ************************************ 00:21:54.266 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:54.266 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:54.266 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 
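After the verify pass, stoptarget and nvmftestfini undo what the prologue set up: the scratch config files are removed, the target process (nvmfpid 422099 in this run) is killed and waited on, the nvme-tcp/nvme-fabrics modules are unloaded, the SPDK_NVMF-tagged iptables rules are stripped via iptables-save | grep -v | iptables-restore, and the namespace address is flushed. A hedged recap of that teardown, reusing the names from the setup sketch earlier; remove_spdk_ns is not expanded in the trace, so the explicit netns delete below is an assumption standing in for it:

# Teardown mirroring nvmftestfini in the trace above.
NS=cvl_0_0_ns_spdk
INITIATOR_IF=cvl_0_1

kill "$NVMF_PID" 2>/dev/null || true     # nvmfpid captured at startup
wait "$NVMF_PID" 2>/dev/null || true

modprobe -v -r nvme-tcp || true          # rmmod nvme_tcp/nvme_fabrics/nvme_keyring as in the log
modprobe -v -r nvme-fabrics || true

# Drop only the rules the test inserted (they carry an SPDK_NVMF comment).
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete "$NS" 2>/dev/null || true   # assumed equivalent of remove_spdk_ns
ip -4 addr flush "$INITIATOR_IF"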
00:21:54.266 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:54.266 ************************************ 00:21:54.266 START TEST nvmf_shutdown_tc2 00:21:54.266 ************************************ 00:21:54.266 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:21:54.266 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:54.266 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:54.266 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:54.266 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.266 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:54.267 10:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:54.267 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:54.267 10:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:54.267 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:54.267 Found net devices under 0000:82:00.0: cvl_0_0 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:54.267 Found net devices under 0000:82:00.1: cvl_0_1 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:54.267 10:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.267 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:54.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:21:54.268 00:21:54.268 --- 10.0.0.2 ping statistics --- 00:21:54.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.268 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:54.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:21:54.268 00:21:54.268 --- 10.0.0.1 ping statistics --- 00:21:54.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.268 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=423465 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 423465 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 423465 ']' 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
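For reference, the nvmf_tcp_init steps traced above reduce to the following standalone sketch. It reuses the names and addresses from this run (cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side, cvl_0_1 kept in the root namespace as the initiator side); it is a simplified illustration of the traced commands, not the harness code itself.

```bash
#!/usr/bin/env bash
set -e
target_if=cvl_0_0            # NIC handed to the target, lives in the namespace
initiator_if=cvl_0_1         # NIC the initiator/host side keeps
ns=cvl_0_0_ns_spdk
initiator_ip=10.0.0.1
target_ip=10.0.0.2

# Start from clean interfaces, then move the target NIC into its own netns.
ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add "$ns"
ip link set "$target_if" netns "$ns"

# Address both ends of the link and bring everything up.
ip addr add "$initiator_ip/24" dev "$initiator_if"
ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# Allow NVMe/TCP traffic (port 4420) in on the initiator side, tagged with a
# comment so it can be removed later, then sanity-check both directions.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 "$target_ip"
ip netns exec "$ns" ping -c 1 "$initiator_ip"
```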
00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.268 [2024-11-15 10:40:42.457654] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:21:54.268 [2024-11-15 10:40:42.457740] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.268 [2024-11-15 10:40:42.535609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.268 [2024-11-15 10:40:42.597812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.268 [2024-11-15 10:40:42.597870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.268 [2024-11-15 10:40:42.597899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.268 [2024-11-15 10:40:42.597918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.268 [2024-11-15 10:40:42.597928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.268 [2024-11-15 10:40:42.599660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.268 [2024-11-15 10:40:42.599701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:54.268 [2024-11-15 10:40:42.599789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:54.268 [2024-11-15 10:40:42.599792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:54.268 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.527 [2024-11-15 10:40:42.756053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:54.527 10:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.527 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.527 Malloc1 
00:21:54.527 [2024-11-15 10:40:42.861836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.527 Malloc2 00:21:54.527 Malloc3 00:21:54.527 Malloc4 00:21:54.785 Malloc5 00:21:54.785 Malloc6 00:21:54.785 Malloc7 00:21:54.785 Malloc8 00:21:54.785 Malloc9 00:21:55.044 Malloc10 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=423641 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 423641 /var/tmp/bdevperf.sock 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 423641 ']' 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
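The bdevperf launch traced here hands the generated bdev configuration to the tool over /dev/fd/63 (process substitution) and then waits for its RPC socket. A minimal sketch of that pattern follows, with the workload flags copied from the trace; gen_nvmf_target_json is assumed to be provided by the test environment, and a plain socket poll stands in for the harness's waitforlisten helper.

```bash
#!/usr/bin/env bash
# Sketch only: SPDK_ROOT and gen_nvmf_target_json come from the test
# environment in the traced run; they are assumed here.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_sock=/var/tmp/bdevperf.sock

# Same workload flags as the trace: queue depth 64, 64 KiB I/O size,
# "verify" workload, 10 second run time, one controller per subsystem 1..10.
"$SPDK_ROOT/build/examples/bdevperf" -r "$rpc_sock" \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

# Stand-in for waitforlisten: poll until the RPC UNIX socket appears.
for _ in $(seq 1 100); do
    [[ -S $rpc_sock ]] && break
    sleep 0.1
done
```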
00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.044 { 00:21:55.044 "params": { 00:21:55.044 "name": "Nvme$subsystem", 00:21:55.044 "trtype": "$TEST_TRANSPORT", 00:21:55.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.044 "adrfam": "ipv4", 00:21:55.044 "trsvcid": "$NVMF_PORT", 00:21:55.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.044 "hdgst": ${hdgst:-false}, 00:21:55.044 "ddgst": ${ddgst:-false} 00:21:55.044 }, 00:21:55.044 "method": "bdev_nvme_attach_controller" 00:21:55.044 } 00:21:55.044 EOF 00:21:55.044 )") 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.044 { 00:21:55.044 "params": { 00:21:55.044 "name": "Nvme$subsystem", 00:21:55.044 "trtype": "$TEST_TRANSPORT", 00:21:55.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.044 "adrfam": "ipv4", 00:21:55.044 "trsvcid": "$NVMF_PORT", 00:21:55.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.044 "hdgst": ${hdgst:-false}, 00:21:55.044 "ddgst": ${ddgst:-false} 00:21:55.044 }, 00:21:55.044 "method": "bdev_nvme_attach_controller" 00:21:55.044 } 00:21:55.044 EOF 00:21:55.044 )") 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.044 { 00:21:55.044 "params": { 00:21:55.044 "name": "Nvme$subsystem", 00:21:55.044 "trtype": "$TEST_TRANSPORT", 00:21:55.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.044 "adrfam": "ipv4", 00:21:55.044 "trsvcid": "$NVMF_PORT", 00:21:55.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.044 "hdgst": ${hdgst:-false}, 00:21:55.044 "ddgst": ${ddgst:-false} 00:21:55.044 }, 00:21:55.044 "method": "bdev_nvme_attach_controller" 00:21:55.044 } 00:21:55.044 EOF 00:21:55.044 )") 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.044 { 00:21:55.044 "params": { 00:21:55.044 "name": "Nvme$subsystem", 00:21:55.044 
"trtype": "$TEST_TRANSPORT", 00:21:55.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.044 "adrfam": "ipv4", 00:21:55.044 "trsvcid": "$NVMF_PORT", 00:21:55.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.044 "hdgst": ${hdgst:-false}, 00:21:55.044 "ddgst": ${ddgst:-false} 00:21:55.044 }, 00:21:55.044 "method": "bdev_nvme_attach_controller" 00:21:55.044 } 00:21:55.044 EOF 00:21:55.044 )") 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.044 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.044 { 00:21:55.044 "params": { 00:21:55.044 "name": "Nvme$subsystem", 00:21:55.044 "trtype": "$TEST_TRANSPORT", 00:21:55.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.044 "adrfam": "ipv4", 00:21:55.044 "trsvcid": "$NVMF_PORT", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.045 "hdgst": ${hdgst:-false}, 00:21:55.045 "ddgst": ${ddgst:-false} 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 } 00:21:55.045 EOF 00:21:55.045 )") 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.045 { 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme$subsystem", 00:21:55.045 "trtype": "$TEST_TRANSPORT", 00:21:55.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "$NVMF_PORT", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.045 "hdgst": ${hdgst:-false}, 00:21:55.045 "ddgst": ${ddgst:-false} 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 } 00:21:55.045 EOF 00:21:55.045 )") 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.045 { 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme$subsystem", 00:21:55.045 "trtype": "$TEST_TRANSPORT", 00:21:55.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "$NVMF_PORT", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.045 "hdgst": ${hdgst:-false}, 00:21:55.045 "ddgst": ${ddgst:-false} 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 } 00:21:55.045 EOF 00:21:55.045 )") 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.045 10:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.045 { 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme$subsystem", 00:21:55.045 "trtype": "$TEST_TRANSPORT", 00:21:55.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "$NVMF_PORT", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.045 "hdgst": ${hdgst:-false}, 00:21:55.045 "ddgst": ${ddgst:-false} 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 } 00:21:55.045 EOF 00:21:55.045 )") 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.045 { 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme$subsystem", 00:21:55.045 "trtype": "$TEST_TRANSPORT", 00:21:55.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "$NVMF_PORT", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.045 "hdgst": ${hdgst:-false}, 00:21:55.045 "ddgst": ${ddgst:-false} 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 } 00:21:55.045 EOF 00:21:55.045 )") 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.045 { 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme$subsystem", 00:21:55.045 "trtype": "$TEST_TRANSPORT", 00:21:55.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "$NVMF_PORT", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.045 "hdgst": ${hdgst:-false}, 00:21:55.045 "ddgst": ${ddgst:-false} 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 } 00:21:55.045 EOF 00:21:55.045 )") 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
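The gen_nvmf_target_json trace above builds one JSON fragment per subsystem from a heredoc template, collects the fragments in a shell array, and joins them with IFS=, before validating with jq; the fully expanded result is printed immediately below in the trace. The stripped-down sketch that follows shows that shell pattern for two controllers only. Note that the outer wrapper object used here (a standard SPDK application JSON config with a bdev subsystem section) is an assumption for illustration; only the per-controller fragments and the jq/IFS/printf steps are visible in the trace.

```bash
#!/usr/bin/env bash
# Sketch of the per-subsystem config assembly seen in the trace above.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
    # One bdev_nvme_attach_controller entry per subsystem, expanded from a
    # template just like the traced heredoc.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join the fragments with commas (IFS=, plus printf, as in the trace) and let
# jq validate/pretty-print the assembled document. The "subsystems"/"bdev"
# wrapper is an assumed illustration of where the fragments end up.
jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=','; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON
```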
00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:55.045 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme1", 00:21:55.045 "trtype": "tcp", 00:21:55.045 "traddr": "10.0.0.2", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "4420", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:55.045 "hdgst": false, 00:21:55.045 "ddgst": false 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 },{ 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme2", 00:21:55.045 "trtype": "tcp", 00:21:55.045 "traddr": "10.0.0.2", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "4420", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:55.045 "hdgst": false, 00:21:55.045 "ddgst": false 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 },{ 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme3", 00:21:55.045 "trtype": "tcp", 00:21:55.045 "traddr": "10.0.0.2", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "4420", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:55.045 "hdgst": false, 00:21:55.045 "ddgst": false 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 },{ 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme4", 00:21:55.045 "trtype": "tcp", 00:21:55.045 "traddr": "10.0.0.2", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "4420", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:55.045 "hdgst": false, 00:21:55.045 "ddgst": false 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 },{ 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme5", 00:21:55.045 "trtype": "tcp", 00:21:55.045 "traddr": "10.0.0.2", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "4420", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:55.045 "hdgst": false, 00:21:55.045 "ddgst": false 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 },{ 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme6", 00:21:55.045 "trtype": "tcp", 00:21:55.045 "traddr": "10.0.0.2", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "4420", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:55.045 "hdgst": false, 00:21:55.045 "ddgst": false 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 },{ 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme7", 00:21:55.045 "trtype": "tcp", 00:21:55.045 "traddr": "10.0.0.2", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "4420", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:55.045 "hdgst": false, 00:21:55.045 "ddgst": false 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 },{ 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme8", 00:21:55.045 "trtype": "tcp", 00:21:55.045 "traddr": "10.0.0.2", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "4420", 00:21:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:55.045 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:55.045 "hdgst": false, 00:21:55.045 "ddgst": false 00:21:55.045 }, 00:21:55.045 "method": "bdev_nvme_attach_controller" 00:21:55.045 },{ 00:21:55.045 "params": { 00:21:55.045 "name": "Nvme9", 00:21:55.045 "trtype": "tcp", 00:21:55.045 "traddr": "10.0.0.2", 00:21:55.045 "adrfam": "ipv4", 00:21:55.045 "trsvcid": "4420", 00:21:55.046 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:55.046 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:55.046 "hdgst": false, 00:21:55.046 "ddgst": false 00:21:55.046 }, 00:21:55.046 "method": "bdev_nvme_attach_controller" 00:21:55.046 },{ 00:21:55.046 "params": { 00:21:55.046 "name": "Nvme10", 00:21:55.046 "trtype": "tcp", 00:21:55.046 "traddr": "10.0.0.2", 00:21:55.046 "adrfam": "ipv4", 00:21:55.046 "trsvcid": "4420", 00:21:55.046 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:55.046 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:55.046 "hdgst": false, 00:21:55.046 "ddgst": false 00:21:55.046 }, 00:21:55.046 "method": "bdev_nvme_attach_controller" 00:21:55.046 }' 00:21:55.046 [2024-11-15 10:40:43.387837] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:21:55.046 [2024-11-15 10:40:43.387910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423641 ] 00:21:55.046 [2024-11-15 10:40:43.459448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.304 [2024-11-15 10:40:43.519070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.677 Running I/O for 10 seconds... 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:57.243 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:57.501 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 423641 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 423641 ']' 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 423641 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 423641 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:57.502 10:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 423641' 00:21:57.502 killing process with pid 423641 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 423641 00:21:57.502 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 423641 00:21:57.760 Received shutdown signal, test time was about 0.883553 seconds 00:21:57.760 00:21:57.760 Latency(us) 00:21:57.760 [2024-11-15T09:40:46.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.760 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.760 Verification LBA range: start 0x0 length 0x400 00:21:57.760 Nvme1n1 : 0.85 225.76 14.11 0.00 0.00 278725.21 32622.36 248551.35 00:21:57.760 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.760 Verification LBA range: start 0x0 length 0x400 00:21:57.760 Nvme2n1 : 0.85 232.40 14.53 0.00 0.00 261526.22 13592.65 257872.02 00:21:57.760 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.760 Verification LBA range: start 0x0 length 0x400 00:21:57.760 Nvme3n1 : 0.84 227.95 14.25 0.00 0.00 263643.34 31457.28 256318.58 00:21:57.760 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.760 Verification LBA range: start 0x0 length 0x400 00:21:57.760 Nvme4n1 : 0.84 227.39 14.21 0.00 0.00 257779.23 19320.98 267192.70 00:21:57.760 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.760 Verification LBA range: start 0x0 length 0x400 00:21:57.760 Nvme5n1 : 0.87 221.77 13.86 0.00 0.00 258947.60 34564.17 259425.47 00:21:57.760 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.760 Verification LBA range: start 0x0 length 0x400 00:21:57.760 Nvme6n1 : 0.87 219.44 13.72 0.00 0.00 256087.74 20971.52 271853.04 00:21:57.760 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.760 Verification LBA range: start 0x0 length 0x400 00:21:57.760 Nvme7n1 : 0.86 223.26 13.95 0.00 0.00 245020.19 32039.82 248551.35 00:21:57.760 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.760 Verification LBA range: start 0x0 length 0x400 00:21:57.760 Nvme8n1 : 0.86 222.49 13.91 0.00 0.00 239955.56 17670.45 267192.70 00:21:57.760 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.760 Verification LBA range: start 0x0 length 0x400 00:21:57.760 Nvme9n1 : 0.88 218.36 13.65 0.00 0.00 239410.95 20388.98 281173.71 00:21:57.760 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.760 Verification LBA range: start 0x0 length 0x400 00:21:57.760 Nvme10n1 : 0.88 214.11 13.38 0.00 0.00 237330.73 19903.53 298261.62 00:21:57.760 [2024-11-15T09:40:46.223Z] =================================================================================================================== 00:21:57.760 [2024-11-15T09:40:46.223Z] Total : 2232.94 139.56 0.00 0.00 253888.44 13592.65 298261.62 00:21:57.760 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@115 -- # kill -0 423465 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.133 rmmod nvme_tcp 00:21:59.133 rmmod nvme_fabrics 00:21:59.133 rmmod nvme_keyring 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 423465 ']' 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 423465 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 423465 ']' 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 423465 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 423465 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 423465' 00:21:59.133 killing process with pid 423465 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@971 -- # kill 423465 00:21:59.133 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 423465 00:21:59.393 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:59.393 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:59.393 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:59.393 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:59.393 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:59.393 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:59.393 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:59.393 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.393 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.393 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.393 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.393 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.935 00:22:01.935 real 0m7.623s 00:22:01.935 user 0m23.259s 00:22:01.935 sys 0m1.487s 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.935 ************************************ 00:22:01.935 END TEST nvmf_shutdown_tc2 00:22:01.935 ************************************ 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:01.935 ************************************ 00:22:01.935 START TEST nvmf_shutdown_tc3 00:22:01.935 ************************************ 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.935 10:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.935 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:01.936 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.936 10:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:01.936 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:01.936 Found net devices under 0000:82:00.0: cvl_0_0 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.936 10:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:01.936 Found net devices under 0000:82:00.1: cvl_0_1 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.936 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.936 10:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:22:01.936 00:22:01.936 --- 10.0.0.2 ping statistics --- 00:22:01.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.936 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:01.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:22:01.936 00:22:01.936 --- 10.0.0.1 ping statistics --- 00:22:01.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.936 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=424553 00:22:01.936 10:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 424553 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 424553 ']' 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:01.936 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.937 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:01.937 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.937 [2024-11-15 10:40:50.113438] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:22:01.937 [2024-11-15 10:40:50.113513] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.937 [2024-11-15 10:40:50.188163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:01.937 [2024-11-15 10:40:50.248616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.937 [2024-11-15 10:40:50.248681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.937 [2024-11-15 10:40:50.248710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.937 [2024-11-15 10:40:50.248722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.937 [2024-11-15 10:40:50.248732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
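Condensed for reference, the fixture the trace above assembles is roughly the following shell sequence. This is a sketch that reuses the cvl_0_* interface names, namespace name, addresses, and nvmf_tgt flags visible in this log; the real logic lives in nvmf/common.sh (nvmf_tcp_init, nvmfappstart, waitforlisten), which also handles flushes, retries, and cleanup traps.
  # Move one ice port into a private namespace so target (10.0.0.2) and initiator (10.0.0.1) are isolated
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic on port 4420; the SPDK_NVMF comment tag is what teardown greps out of iptables-save later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: allow 4420'
  # Basic reachability check in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Start the target inside the namespace and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  # waitforlisten in the scripts polls the RPC interface; a socket-file check is a rough stand-in
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done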
00:22:01.937 [2024-11-15 10:40:50.250344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.937 [2024-11-15 10:40:50.250408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:01.937 [2024-11-15 10:40:50.250475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:01.937 [2024-11-15 10:40:50.250478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.937 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:01.937 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:01.937 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:01.937 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.937 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.937 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.937 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:01.937 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.937 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.195 [2024-11-15 10:40:50.403849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.195 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.195 Malloc1 00:22:02.195 [2024-11-15 10:40:50.497509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.195 Malloc2 00:22:02.195 Malloc3 00:22:02.195 Malloc4 00:22:02.453 Malloc5 00:22:02.453 Malloc6 00:22:02.453 Malloc7 00:22:02.453 Malloc8 00:22:02.453 Malloc9 00:22:02.712 Malloc10 00:22:02.712 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.712 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:02.712 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:02.712 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.712 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=424632 00:22:02.712 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 424632 /var/tmp/bdevperf.sock 00:22:02.712 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 424632 ']' 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.713 { 00:22:02.713 "params": { 00:22:02.713 "name": "Nvme$subsystem", 00:22:02.713 "trtype": "$TEST_TRANSPORT", 00:22:02.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.713 "adrfam": "ipv4", 00:22:02.713 "trsvcid": "$NVMF_PORT", 00:22:02.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.713 "hdgst": ${hdgst:-false}, 00:22:02.713 "ddgst": ${ddgst:-false} 00:22:02.713 }, 00:22:02.713 "method": "bdev_nvme_attach_controller" 00:22:02.713 } 00:22:02.713 EOF 00:22:02.713 )") 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.713 { 00:22:02.713 "params": { 00:22:02.713 "name": "Nvme$subsystem", 00:22:02.713 "trtype": "$TEST_TRANSPORT", 00:22:02.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.713 "adrfam": "ipv4", 00:22:02.713 "trsvcid": "$NVMF_PORT", 00:22:02.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.713 "hdgst": ${hdgst:-false}, 00:22:02.713 "ddgst": ${ddgst:-false} 00:22:02.713 }, 00:22:02.713 "method": "bdev_nvme_attach_controller" 00:22:02.713 } 00:22:02.713 EOF 00:22:02.713 )") 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.713 { 00:22:02.713 "params": { 00:22:02.713 "name": 
"Nvme$subsystem", 00:22:02.713 "trtype": "$TEST_TRANSPORT", 00:22:02.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.713 "adrfam": "ipv4", 00:22:02.713 "trsvcid": "$NVMF_PORT", 00:22:02.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.713 "hdgst": ${hdgst:-false}, 00:22:02.713 "ddgst": ${ddgst:-false} 00:22:02.713 }, 00:22:02.713 "method": "bdev_nvme_attach_controller" 00:22:02.713 } 00:22:02.713 EOF 00:22:02.713 )") 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.713 { 00:22:02.713 "params": { 00:22:02.713 "name": "Nvme$subsystem", 00:22:02.713 "trtype": "$TEST_TRANSPORT", 00:22:02.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.713 "adrfam": "ipv4", 00:22:02.713 "trsvcid": "$NVMF_PORT", 00:22:02.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.713 "hdgst": ${hdgst:-false}, 00:22:02.713 "ddgst": ${ddgst:-false} 00:22:02.713 }, 00:22:02.713 "method": "bdev_nvme_attach_controller" 00:22:02.713 } 00:22:02.713 EOF 00:22:02.713 )") 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.713 { 00:22:02.713 "params": { 00:22:02.713 "name": "Nvme$subsystem", 00:22:02.713 "trtype": "$TEST_TRANSPORT", 00:22:02.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.713 "adrfam": "ipv4", 00:22:02.713 "trsvcid": "$NVMF_PORT", 00:22:02.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.713 "hdgst": ${hdgst:-false}, 00:22:02.713 "ddgst": ${ddgst:-false} 00:22:02.713 }, 00:22:02.713 "method": "bdev_nvme_attach_controller" 00:22:02.713 } 00:22:02.713 EOF 00:22:02.713 )") 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.713 { 00:22:02.713 "params": { 00:22:02.713 "name": "Nvme$subsystem", 00:22:02.713 "trtype": "$TEST_TRANSPORT", 00:22:02.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.713 "adrfam": "ipv4", 00:22:02.713 "trsvcid": "$NVMF_PORT", 00:22:02.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.713 "hdgst": ${hdgst:-false}, 00:22:02.713 "ddgst": ${ddgst:-false} 00:22:02.713 }, 00:22:02.713 "method": "bdev_nvme_attach_controller" 00:22:02.713 } 00:22:02.713 EOF 00:22:02.713 )") 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.713 { 00:22:02.713 "params": { 00:22:02.713 "name": "Nvme$subsystem", 00:22:02.713 "trtype": "$TEST_TRANSPORT", 00:22:02.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.713 "adrfam": "ipv4", 00:22:02.713 "trsvcid": "$NVMF_PORT", 00:22:02.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.713 "hdgst": ${hdgst:-false}, 00:22:02.713 "ddgst": ${ddgst:-false} 00:22:02.713 }, 00:22:02.713 "method": "bdev_nvme_attach_controller" 00:22:02.713 } 00:22:02.713 EOF 00:22:02.713 )") 00:22:02.713 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:02.713 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.713 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.713 { 00:22:02.713 "params": { 00:22:02.713 "name": "Nvme$subsystem", 00:22:02.713 "trtype": "$TEST_TRANSPORT", 00:22:02.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.713 "adrfam": "ipv4", 00:22:02.713 "trsvcid": "$NVMF_PORT", 00:22:02.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.713 "hdgst": ${hdgst:-false}, 00:22:02.713 "ddgst": ${ddgst:-false} 00:22:02.713 }, 00:22:02.713 "method": "bdev_nvme_attach_controller" 00:22:02.713 } 00:22:02.713 EOF 00:22:02.713 )") 00:22:02.713 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:02.713 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.713 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.713 { 00:22:02.713 "params": { 00:22:02.713 "name": "Nvme$subsystem", 00:22:02.713 "trtype": "$TEST_TRANSPORT", 00:22:02.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.713 "adrfam": "ipv4", 00:22:02.713 "trsvcid": "$NVMF_PORT", 00:22:02.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.713 "hdgst": ${hdgst:-false}, 00:22:02.713 "ddgst": ${ddgst:-false} 00:22:02.713 }, 00:22:02.713 "method": "bdev_nvme_attach_controller" 00:22:02.713 } 00:22:02.713 EOF 00:22:02.713 )") 00:22:02.713 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:02.713 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.713 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.713 { 00:22:02.713 "params": { 00:22:02.713 "name": "Nvme$subsystem", 00:22:02.713 "trtype": "$TEST_TRANSPORT", 00:22:02.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.713 "adrfam": "ipv4", 00:22:02.713 "trsvcid": "$NVMF_PORT", 00:22:02.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.714 "hdgst": ${hdgst:-false}, 00:22:02.714 "ddgst": ${ddgst:-false} 00:22:02.714 }, 00:22:02.714 "method": "bdev_nvme_attach_controller" 00:22:02.714 } 00:22:02.714 EOF 00:22:02.714 )") 00:22:02.714 10:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:02.714 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:02.714 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:02.714 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:02.714 "params": { 00:22:02.714 "name": "Nvme1", 00:22:02.714 "trtype": "tcp", 00:22:02.714 "traddr": "10.0.0.2", 00:22:02.714 "adrfam": "ipv4", 00:22:02.714 "trsvcid": "4420", 00:22:02.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.714 "hdgst": false, 00:22:02.714 "ddgst": false 00:22:02.714 }, 00:22:02.714 "method": "bdev_nvme_attach_controller" 00:22:02.714 },{ 00:22:02.714 "params": { 00:22:02.714 "name": "Nvme2", 00:22:02.714 "trtype": "tcp", 00:22:02.714 "traddr": "10.0.0.2", 00:22:02.714 "adrfam": "ipv4", 00:22:02.714 "trsvcid": "4420", 00:22:02.714 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:02.714 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:02.714 "hdgst": false, 00:22:02.714 "ddgst": false 00:22:02.714 }, 00:22:02.714 "method": "bdev_nvme_attach_controller" 00:22:02.714 },{ 00:22:02.714 "params": { 00:22:02.714 "name": "Nvme3", 00:22:02.714 "trtype": "tcp", 00:22:02.714 "traddr": "10.0.0.2", 00:22:02.714 "adrfam": "ipv4", 00:22:02.714 "trsvcid": "4420", 00:22:02.714 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:02.714 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:02.714 "hdgst": false, 00:22:02.714 "ddgst": false 00:22:02.714 }, 00:22:02.714 "method": "bdev_nvme_attach_controller" 00:22:02.714 },{ 00:22:02.714 "params": { 00:22:02.714 "name": "Nvme4", 00:22:02.714 "trtype": "tcp", 00:22:02.714 "traddr": "10.0.0.2", 00:22:02.714 "adrfam": "ipv4", 00:22:02.714 "trsvcid": "4420", 00:22:02.714 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:02.714 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:02.714 "hdgst": false, 00:22:02.714 "ddgst": false 00:22:02.714 }, 00:22:02.714 "method": "bdev_nvme_attach_controller" 00:22:02.714 },{ 00:22:02.714 "params": { 00:22:02.714 "name": "Nvme5", 00:22:02.714 "trtype": "tcp", 00:22:02.714 "traddr": "10.0.0.2", 00:22:02.714 "adrfam": "ipv4", 00:22:02.714 "trsvcid": "4420", 00:22:02.714 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:02.714 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:02.714 "hdgst": false, 00:22:02.714 "ddgst": false 00:22:02.714 }, 00:22:02.714 "method": "bdev_nvme_attach_controller" 00:22:02.714 },{ 00:22:02.714 "params": { 00:22:02.714 "name": "Nvme6", 00:22:02.714 "trtype": "tcp", 00:22:02.714 "traddr": "10.0.0.2", 00:22:02.714 "adrfam": "ipv4", 00:22:02.714 "trsvcid": "4420", 00:22:02.714 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:02.714 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:02.714 "hdgst": false, 00:22:02.714 "ddgst": false 00:22:02.714 }, 00:22:02.714 "method": "bdev_nvme_attach_controller" 00:22:02.714 },{ 00:22:02.714 "params": { 00:22:02.714 "name": "Nvme7", 00:22:02.714 "trtype": "tcp", 00:22:02.714 "traddr": "10.0.0.2", 00:22:02.714 "adrfam": "ipv4", 00:22:02.714 "trsvcid": "4420", 00:22:02.714 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:02.714 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:02.714 "hdgst": false, 00:22:02.714 "ddgst": false 00:22:02.714 }, 00:22:02.714 "method": "bdev_nvme_attach_controller" 00:22:02.714 },{ 00:22:02.714 "params": { 00:22:02.714 "name": "Nvme8", 00:22:02.714 "trtype": "tcp", 
00:22:02.714 "traddr": "10.0.0.2", 00:22:02.714 "adrfam": "ipv4", 00:22:02.714 "trsvcid": "4420", 00:22:02.714 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:02.714 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:02.714 "hdgst": false, 00:22:02.714 "ddgst": false 00:22:02.714 }, 00:22:02.714 "method": "bdev_nvme_attach_controller" 00:22:02.714 },{ 00:22:02.714 "params": { 00:22:02.714 "name": "Nvme9", 00:22:02.714 "trtype": "tcp", 00:22:02.714 "traddr": "10.0.0.2", 00:22:02.714 "adrfam": "ipv4", 00:22:02.714 "trsvcid": "4420", 00:22:02.714 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:02.714 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:02.714 "hdgst": false, 00:22:02.714 "ddgst": false 00:22:02.714 }, 00:22:02.714 "method": "bdev_nvme_attach_controller" 00:22:02.714 },{ 00:22:02.714 "params": { 00:22:02.714 "name": "Nvme10", 00:22:02.714 "trtype": "tcp", 00:22:02.714 "traddr": "10.0.0.2", 00:22:02.714 "adrfam": "ipv4", 00:22:02.714 "trsvcid": "4420", 00:22:02.714 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:02.714 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:02.714 "hdgst": false, 00:22:02.714 "ddgst": false 00:22:02.714 }, 00:22:02.714 "method": "bdev_nvme_attach_controller" 00:22:02.714 }' 00:22:02.714 [2024-11-15 10:40:51.026123] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:22:02.714 [2024-11-15 10:40:51.026197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424632 ] 00:22:02.714 [2024-11-15 10:40:51.100584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.714 [2024-11-15 10:40:51.160021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.613 Running I/O for 10 seconds... 
00:22:04.871 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:04.871 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:04.871 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:04.871 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.871 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:04.872 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 424553 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 424553 ']' 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 424553 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 424553 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 424553' 00:22:05.145 killing process with pid 424553 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 424553 00:22:05.145 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 424553 00:22:05.145 [2024-11-15 10:40:53.494599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d1b0 is same with the state(6) to be set 00:22:05.145 [2024-11-15 10:40:53.494760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d1b0 is same with the state(6) to be set 00:22:05.145 [2024-11-15 10:40:53.494776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d1b0 is same with the state(6) to be set 00:22:05.145 [2024-11-15 10:40:53.494788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d1b0 is same with the state(6) to be set 00:22:05.145 [2024-11-15 10:40:53.494800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d1b0 is same with the state(6) to be set 00:22:05.145 [2024-11-15 10:40:53.494812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1b5d1b0 is same with the state(6) to be set
[the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* line ("The recv state of tqpair=... is same with the state(6) to be set") repeats back-to-back many times, first for tqpair=0x1b5d1b0 and then for tqpair=0x1b3e010, timestamped from 10:40:53.494 onward; the run continues past the end of this excerpt]
10:40:53.497199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.146 [2024-11-15 10:40:53.497212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.146 [2024-11-15 10:40:53.497224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.146 [2024-11-15 10:40:53.497236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.146 [2024-11-15 10:40:53.497248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.146 [2024-11-15 10:40:53.497260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.146 [2024-11-15 10:40:53.497272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.146 [2024-11-15 10:40:53.497284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.146 [2024-11-15 10:40:53.497296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.146 [2024-11-15 10:40:53.497308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.146 [2024-11-15 10:40:53.497320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.146 [2024-11-15 10:40:53.497331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.146 [2024-11-15 10:40:53.497377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same 
with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.497683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e010 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499239] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the 
state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.147 [2024-11-15 10:40:53.499826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.499838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.499850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.499861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.499873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.499885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.499901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.499913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.499925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.499936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d680 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 
10:40:53.501890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.501998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same 
with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148 [2024-11-15 10:40:53.502871] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148
[2024-11-15 10:40:53.502893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148
[2024-11-15 10:40:53.502912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148
[2024-11-15 10:40:53.502933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148
[2024-11-15 10:40:53.502953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148
[2024-11-15 10:40:53.502973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148
[2024-11-15 10:40:53.502996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148
[2024-11-15 10:40:53.503016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148
[2024-11-15 10:40:53.503043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5db50 is same with the state(6) to be set 00:22:05.148
[2024-11-15 10:40:53.504875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.148
[2024-11-15 10:40:53.504916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.148
[2024-11-15 10:40:53.504933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.148
[2024-11-15 10:40:53.504947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.148
[2024-11-15 10:40:53.504960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.148
[2024-11-15 10:40:53.504973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.148
[2024-11-15 10:40:53.504987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.148
[2024-11-15 10:40:53.504988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.148
[2024-11-15 10:40:53.505000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.148
[2024-11-15 10:40:53.505015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa4220 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.149
[2024-11-15 10:40:53.505122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.149
[2024-11-15 10:40:53.505146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.149
[2024-11-15 10:40:53.505188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.149
[2024-11-15 10:40:53.505214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9060 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.149
[2024-11-15 10:40:53.505302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.149
[2024-11-15 10:40:53.505341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.149
[2024-11-15 10:40:53.505388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.149
[2024-11-15 10:40:53.505432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfab0b0 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.149
[2024-11-15 10:40:53.505507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.149
[2024-11-15 10:40:53.505550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.149
[2024-11-15 10:40:53.505575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.149
[2024-11-15 10:40:53.505600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfad6f0 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.149
[2024-11-15 10:40:53.505704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.149
[2024-11-15 10:40:53.505708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.149
[2024-11-15 10:40:53.505716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.150
[2024-11-15 10:40:53.505722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.150
[2024-11-15 10:40:53.505728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.150
[2024-11-15 10:40:53.505736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.150
[2024-11-15 10:40:53.505741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.150
[2024-11-15 10:40:53.505750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.150 [2024-11-15 10:40:53.505753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.505763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.150 [2024-11-15 10:40:53.505766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.505776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.150 [2024-11-15 10:40:53.505778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.505789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8f70 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.505791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.505804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.505829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.505843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.505855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e040 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 
10:40:53.508132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same 
with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.150 [2024-11-15 10:40:53.508495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508663] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.508725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18edde0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.509059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee2b0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.509897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee2b0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.509911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.151 [2024-11-15 10:40:53.509935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee2b0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.509942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.151 [2024-11-15 10:40:53.509948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee2b0 is same with the state(6) to be set 00:22:05.151 [2024-11-15 10:40:53.509956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.509971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.509984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.509998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 
10:40:53.510164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 
10:40:53.510481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 
10:40:53.510766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.510974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.152 [2024-11-15 10:40:53.510987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.152 [2024-11-15 10:40:53.511025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:05.152 [2024-11-15 10:40:53.511187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.152 [2024-11-15 10:40:53.511221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.152 [2024-11-15 10:40:53.511256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.152 [2024-11-15 10:40:53.511278] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.152 [2024-11-15 10:40:53.511308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.152 [2024-11-15 10:40:53.511331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.152 [2024-11-15 10:40:53.511351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.152 [2024-11-15 10:40:53.511380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:12[2024-11-15 10:40:53.511403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.511461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:22:05.153 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 
10:40:53.511554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:12[2024-11-15 10:40:53.511568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-15 10:40:53.511735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.511784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:1the state(6) to be set 00:22:05.153 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.511827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:22:05.153 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.511892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:22:05.153 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.511957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.511970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.511978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.511984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:22:05.153 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.512002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.512002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.512015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.512022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.512029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.512044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.512045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.512058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.512065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.512071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:22:05.153 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.512087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.512087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.512100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.512108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.512114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.512128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.153 [2024-11-15 10:40:53.512128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.512143] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.153 [2024-11-15 10:40:53.512149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.153 [2024-11-15 10:40:53.512161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.154 [2024-11-15 10:40:53.512180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.154 [2024-11-15 10:40:53.512209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.154 [2024-11-15 10:40:53.512222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.512237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:1the state(6) to be set 00:22:05.154 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.154 [2024-11-15 10:40:53.512268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.512282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:22:05.154 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.154 [2024-11-15 10:40:53.512312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be 
set 00:22:05.154 [2024-11-15 10:40:53.512326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.512355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:1the state(6) to be set 00:22:05.154 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.154 [2024-11-15 10:40:53.512415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.512428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:22:05.154 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.154 [2024-11-15 10:40:53.512465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.512480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:1the state(6) to be set 00:22:05.154 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.154 [2024-11-15 10:40:53.512511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.512525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:22:05.154 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.154 [2024-11-15 10:40:53.512557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.154 [2024-11-15 10:40:53.512572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.154 [2024-11-15 10:40:53.512601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.512615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:22:05.154 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.154 [2024-11-15 10:40:53.512650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with [2024-11-15 10:40:53.512665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:1the state(6) to be set 00:22:05.154 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5e3c0 is same with the state(6) to be set 00:22:05.154 [2024-11-15 10:40:53.512714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 
[2024-11-15 10:40:53.512799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.154 [2024-11-15 10:40:53.512923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.154 [2024-11-15 10:40:53.512935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.512949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.512962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.512976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.512989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.513016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.513046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 
10:40:53.513074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.513101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.513129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.513160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.513187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.513215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.513242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.513268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.513295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.513327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.513355] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.513417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.513456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:05.155 [2024-11-15 10:40:53.514136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.514174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.514200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.514215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.514230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.514243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.514258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.514277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.514291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.514304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.514319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.514333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.514347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.514383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.514404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.514431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 
10:40:53.514448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.514461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.155 [2024-11-15 10:40:53.514477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.155 [2024-11-15 10:40:53.514579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.514816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.155 [2024-11-15 10:40:53.515385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same
with the state(6) to be set 00:22:05.156 [2024-11-15 10:40:53.515396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5eac0 is same with the state(6) to be set 00:22:05.156 [2024-11-15 10:40:53.530643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.530725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.530743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.530760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.530775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.530792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.530806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.530822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.530836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.530853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.530867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.530883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.530897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.530912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.530926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.530942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.530955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.530971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.530984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:05.156 [2024-11-15 10:40:53.531000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.531029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.531068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.531097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.531126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.531156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.531185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.531214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.531242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.531272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 
[2024-11-15 10:40:53.531300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.531330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.531358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.156 [2024-11-15 10:40:53.531400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.156 [2024-11-15 10:40:53.531414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 
10:40:53.531612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531910] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.531982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.531997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.532011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.532040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.532068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.532097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.532126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.532155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.532189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.532217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.532246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.532275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.532303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.157 [2024-11-15 10:40:53.532333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:05.157 [2024-11-15 10:40:53.532867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa4220 (9): Bad file descriptor 00:22:05.157 [2024-11-15 10:40:53.532935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.157 [2024-11-15 10:40:53.532956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.157 [2024-11-15 10:40:53.532985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.532999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.157 [2024-11-15 10:40:53.533013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.533026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.157 [2024-11-15 10:40:53.533039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.533052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf15110 is same with the state(6) to be set 00:22:05.157 [2024-11-15 10:40:53.533076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9060 (9): Bad file descriptor 00:22:05.157 [2024-11-15 10:40:53.533131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.157 [2024-11-15 10:40:53.533160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.533175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.157 [2024-11-15 10:40:53.533188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.157 [2024-11-15 10:40:53.533202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.157 [2024-11-15 10:40:53.533214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.158 [2024-11-15 10:40:53.533242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1090 is same with the state(6) to be set 00:22:05.158 [2024-11-15 10:40:53.533285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfab0b0 (9): Bad file descriptor 00:22:05.158 [2024-11-15 10:40:53.533318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfad6f0 (9): Bad file descriptor 00:22:05.158 [2024-11-15 10:40:53.533350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d8f70 (9): Bad file descriptor 00:22:05.158 [2024-11-15 10:40:53.533409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.158 [2024-11-15 10:40:53.533440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.158 [2024-11-15 10:40:53.533467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.158 [2024-11-15 10:40:53.533506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.158 [2024-11-15 10:40:53.533533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1e90 is same with the state(6) to be set 00:22:05.158 [2024-11-15 10:40:53.533605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.158 [2024-11-15 10:40:53.533625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.158 [2024-11-15 10:40:53.533653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.158 [2024-11-15 10:40:53.533687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.158 [2024-11-15 10:40:53.533719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d81f0 is same with the state(6) to be set 00:22:05.158 [2024-11-15 10:40:53.533782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.158 [2024-11-15 10:40:53.533803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.158 [2024-11-15 10:40:53.533830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.158 [2024-11-15 10:40:53.533857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.158 [2024-11-15 10:40:53.533882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.533895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d21c0 is same with the state(6) to be set 00:22:05.158 [2024-11-15 10:40:53.537854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 
00:22:05.158 [2024-11-15 10:40:53.537910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:05.158 [2024-11-15 10:40:53.537934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf15110 (9): Bad file descriptor 00:22:05.158 [2024-11-15 10:40:53.537956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d1090 (9): Bad file descriptor 00:22:05.158 [2024-11-15 10:40:53.538471] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:05.158 [2024-11-15 10:40:53.538509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:05.158 [2024-11-15 10:40:53.538537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d1e90 (9): Bad file descriptor 00:22:05.158 [2024-11-15 10:40:53.538631] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:05.158 [2024-11-15 10:40:53.538704] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:05.158 [2024-11-15 10:40:53.538773] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:05.158 [2024-11-15 10:40:53.540014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.158 [2024-11-15 10:40:53.540048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d1090 with addr=10.0.0.2, port=4420 00:22:05.158 [2024-11-15 10:40:53.540067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1090 is same with the state(6) to be set 00:22:05.158 [2024-11-15 10:40:53.540204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.158 [2024-11-15 10:40:53.540229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf15110 with addr=10.0.0.2, port=4420 00:22:05.158 [2024-11-15 10:40:53.540245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf15110 is same with the state(6) to be set 00:22:05.158 [2024-11-15 10:40:53.540403] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:05.158 [2024-11-15 10:40:53.540502] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:05.158 [2024-11-15 10:40:53.540581] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:05.158 [2024-11-15 10:40:53.540683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.158 [2024-11-15 10:40:53.540709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d1e90 with addr=10.0.0.2, port=4420 00:22:05.158 [2024-11-15 10:40:53.540726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1e90 is same with the state(6) to be set 00:22:05.158 [2024-11-15 10:40:53.540744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d1090 (9): Bad file descriptor 00:22:05.158 [2024-11-15 10:40:53.540765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf15110 (9): Bad file descriptor 00:22:05.158 [2024-11-15 10:40:53.540907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d1e90 (9): Bad file descriptor 00:22:05.158 [2024-11-15 10:40:53.540931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in 
error state 00:22:05.158 [2024-11-15 10:40:53.540945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:05.158 [2024-11-15 10:40:53.540961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:05.158 [2024-11-15 10:40:53.540978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:05.158 [2024-11-15 10:40:53.540993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:05.158 [2024-11-15 10:40:53.541005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:05.158 [2024-11-15 10:40:53.541017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:05.158 [2024-11-15 10:40:53.541029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:05.158 [2024-11-15 10:40:53.541081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:05.158 [2024-11-15 10:40:53.541098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:05.158 [2024-11-15 10:40:53.541111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:05.158 [2024-11-15 10:40:53.541124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:05.158 [2024-11-15 10:40:53.542848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d81f0 (9): Bad file descriptor 00:22:05.158 [2024-11-15 10:40:53.542890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d21c0 (9): Bad file descriptor 00:22:05.158 [2024-11-15 10:40:53.543042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.158 [2024-11-15 10:40:53.543067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.543095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.158 [2024-11-15 10:40:53.543111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.543128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.158 [2024-11-15 10:40:53.543142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.543165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.158 [2024-11-15 10:40:53.543180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.543196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.158 [2024-11-15 10:40:53.543211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.158 [2024-11-15 10:40:53.543227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.158 [2024-11-15 10:40:53.543242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.159 [2024-11-15 10:40:53.543787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.159 [2024-11-15 10:40:53.543803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL 
00:22:05.159 [2024-11-15 10:40:53.543816 - 10:40:53.544979] nvme_qpair.c (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): *NOTICE*: READ sqid:1 cid:25-63 nsid:1 lba:19584-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [continuation of the preceding run; repeated NOTICE pairs condensed]
00:22:05.160 [2024-11-15 10:40:53.544992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b1770 is same with the state(6) to be set
00:22:05.160 [2024-11-15 10:40:53.546276 - 10:40:53.546625] nvme_qpair.c (243/474): *NOTICE*: WRITE sqid:1 cid:53-63 nsid:1 lba:23168-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 [repeated NOTICE pairs condensed]
00:22:05.160 [2024-11-15 10:40:53.546640 - 10:40:53.548242] nvme_qpair.c (243/474): *NOTICE*: READ sqid:1 cid:0-52 nsid:1 lba:16384-23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 [repeated NOTICE pairs condensed]
00:22:05.162 [2024-11-15 10:40:53.548260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b28d0 is same with the state(6) to be set
00:22:05.162 [2024-11-15 10:40:53.549518 - 10:40:53.551452] nvme_qpair.c (243/474): *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 [repeated NOTICE pairs condensed]
00:22:05.164 [2024-11-15 10:40:53.551471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13835b0 is same with the state(6) to be set
00:22:05.164 [2024-11-15 10:40:53.552735 - 10:40:53.563405] nvme_qpair.c (243/474): *NOTICE*: READ sqid:1 cid:4-45 nsid:1 lba:16896-22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 [repeated NOTICE pairs condensed; run continues]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.563978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.563993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.564007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.564023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.564036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.564051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.564065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.564080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b06e0 is same with the state(6) to be set 00:22:05.165 [2024-11-15 10:40:53.565525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.565550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.565574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.565590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.565606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.565619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.565634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.565648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.565663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.565676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.565692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.565710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.565726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.565740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.565755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.565769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.565784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.565804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.565821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.565835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.565850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.165 [2024-11-15 10:40:53.565863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.165 [2024-11-15 10:40:53.565879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.565892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.565908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.565930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.565946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.565959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.565974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.565988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:05.166 [2024-11-15 10:40:53.566696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.566971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.566987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 
10:40:53.567000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.567015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.567029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.567044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.567058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.567073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.166 [2024-11-15 10:40:53.567087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.166 [2024-11-15 10:40:53.567102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.567116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.567130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.567144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.567160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.567173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.567189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.567202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.567218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.567231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.567247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.567260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.567275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.567289] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.567304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.567321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.567337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.567351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.567374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.567390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.567416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.567430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.567446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.567459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.567474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.567488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.567503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f0dd0 is same with the state(6) to be set 00:22:05.167 [2024-11-15 10:40:53.568748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:05.167 [2024-11-15 10:40:53.568782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:05.167 [2024-11-15 10:40:53.568801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:05.167 [2024-11-15 10:40:53.568817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:05.167 [2024-11-15 10:40:53.568941] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:22:05.167 [2024-11-15 10:40:53.569084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:05.167 [2024-11-15 10:40:53.569413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.167 [2024-11-15 10:40:53.569443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfad6f0 with addr=10.0.0.2, port=4420 00:22:05.167 [2024-11-15 10:40:53.569459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfad6f0 is same with the state(6) to be set 00:22:05.167 [2024-11-15 10:40:53.569586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.167 [2024-11-15 10:40:53.569611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa4220 with addr=10.0.0.2, port=4420 00:22:05.167 [2024-11-15 10:40:53.569634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa4220 is same with the state(6) to be set 00:22:05.167 [2024-11-15 10:40:53.569793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.167 [2024-11-15 10:40:53.569817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfab0b0 with addr=10.0.0.2, port=4420 00:22:05.167 [2024-11-15 10:40:53.569833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfab0b0 is same with the state(6) to be set 00:22:05.167 [2024-11-15 10:40:53.569994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.167 [2024-11-15 10:40:53.570019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d8f70 with addr=10.0.0.2, port=4420 00:22:05.167 [2024-11-15 10:40:53.570034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8f70 is same with the state(6) to be set 00:22:05.167 [2024-11-15 10:40:53.571167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 
10:40:53.571320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571625] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.167 [2024-11-15 10:40:53.571711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.167 [2024-11-15 10:40:53.571726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.571739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.571755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.571769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.571784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.571798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.571814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.571827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.571842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.571855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.571871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.571884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.571904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.571918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.571933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.571947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.571962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.571976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.571991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.168 [2024-11-15 10:40:53.572708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.168 [2024-11-15 10:40:53.572723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.572736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.572751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.572764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.572779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.572792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.572807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.572820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.572835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.572848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.572863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.572876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.572891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.572904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.572920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.572933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.572948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.572962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.572977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.572994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.573010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.573023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.573038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.573051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.573065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b44e0 is same with the state(6) to be set 00:22:05.169 [2024-11-15 10:40:53.574332] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.574970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.574983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.575001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.575016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.575031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.575044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.575060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.575073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.575088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.575101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.575116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.169 [2024-11-15 10:40:53.575129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.169 [2024-11-15 10:40:53.575145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:05.170 [2024-11-15 10:40:53.575244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 
10:40:53.575542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575832] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.575980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.575996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.576010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.576026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.576040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.576055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.576068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.576084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.576097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.576117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.576131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.576146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.576160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.576176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.576190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.576205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.170 [2024-11-15 10:40:53.576219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.170 [2024-11-15 10:40:53.576233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ef8c0 is same with the state(6) to be set 00:22:05.170 [2024-11-15 10:40:53.577721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:05.170 [2024-11-15 10:40:53.577764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:05.170 [2024-11-15 10:40:53.577785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:05.170 [2024-11-15 10:40:53.577802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:05.170 task offset: 16384 on job bdev=Nvme5n1 fails 00:22:05.170 00:22:05.170 Latency(us) 00:22:05.170 [2024-11-15T09:40:53.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.171 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.171 Job: Nvme1n1 ended in about 0.79 seconds with error 00:22:05.171 Verification LBA range: start 0x0 length 0x400 00:22:05.171 Nvme1n1 : 0.79 162.87 10.18 81.44 0.00 258405.26 19612.25 260978.92 00:22:05.171 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.171 Job: Nvme2n1 ended in about 0.79 seconds with error 00:22:05.171 Verification LBA range: start 0x0 length 0x400 00:22:05.171 Nvme2n1 : 0.79 162.20 10.14 81.10 0.00 253143.92 19126.80 257872.02 00:22:05.171 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.171 Job: Nvme3n1 ended in about 0.79 seconds with error 00:22:05.171 Verification LBA range: start 0x0 length 0x400 00:22:05.171 Nvme3n1 : 0.79 161.55 10.10 80.77 0.00 247842.89 20000.62 260978.92 00:22:05.171 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.171 Job: Nvme4n1 ended in about 0.80 seconds with error 00:22:05.171 Verification LBA range: start 0x0 length 0x400 00:22:05.171 Nvme4n1 : 0.80 163.98 10.25 79.51 0.00 240737.06 19126.80 245444.46 00:22:05.171 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.171 Job: Nvme5n1 ended in about 0.78 seconds with error 00:22:05.171 Verification LBA range: start 0x0 length 0x400 00:22:05.171 Nvme5n1 : 0.78 165.13 
10.32 82.57 0.00 229398.76 23592.96 243891.01 00:22:05.171 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.171 Job: Nvme6n1 ended in about 0.78 seconds with error 00:22:05.171 Verification LBA range: start 0x0 length 0x400 00:22:05.171 Nvme6n1 : 0.78 164.90 10.31 82.45 0.00 223567.27 24175.50 268746.15 00:22:05.171 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.171 Job: Nvme7n1 ended in about 0.81 seconds with error 00:22:05.171 Verification LBA range: start 0x0 length 0x400 00:22:05.171 Nvme7n1 : 0.81 157.27 9.83 78.63 0.00 230150.45 21456.97 262532.36 00:22:05.171 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.171 Job: Nvme8n1 ended in about 0.78 seconds with error 00:22:05.171 Verification LBA range: start 0x0 length 0x400 00:22:05.171 Nvme8n1 : 0.78 164.65 10.29 82.32 0.00 211907.07 22719.15 228356.55 00:22:05.171 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.171 Job: Nvme9n1 ended in about 0.82 seconds with error 00:22:05.171 Verification LBA range: start 0x0 length 0x400 00:22:05.171 Nvme9n1 : 0.82 78.33 4.90 78.33 0.00 328844.33 21845.33 313796.08 00:22:05.171 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.171 Job: Nvme10n1 ended in about 0.81 seconds with error 00:22:05.171 Verification LBA range: start 0x0 length 0x400 00:22:05.171 Nvme10n1 : 0.81 79.18 4.95 79.18 0.00 315437.13 21262.79 290494.39 00:22:05.171 [2024-11-15T09:40:53.634Z] =================================================================================================================== 00:22:05.171 [2024-11-15T09:40:53.634Z] Total : 1460.06 91.25 806.30 0.00 249053.61 19126.80 313796.08 00:22:05.430 [2024-11-15 10:40:53.611563] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:05.430 [2024-11-15 10:40:53.611667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:05.430 [2024-11-15 10:40:53.612034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.430 [2024-11-15 10:40:53.612072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9060 with addr=10.0.0.2, port=4420 00:22:05.430 [2024-11-15 10:40:53.612095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9060 is same with the state(6) to be set 00:22:05.430 [2024-11-15 10:40:53.612127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfad6f0 (9): Bad file descriptor 00:22:05.430 [2024-11-15 10:40:53.612153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa4220 (9): Bad file descriptor 00:22:05.430 [2024-11-15 10:40:53.612172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfab0b0 (9): Bad file descriptor 00:22:05.430 [2024-11-15 10:40:53.612190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d8f70 (9): Bad file descriptor 00:22:05.430 [2024-11-15 10:40:53.612640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.430 [2024-11-15 10:40:53.612671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf15110 with addr=10.0.0.2, port=4420 00:22:05.430 [2024-11-15 10:40:53.612688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf15110 is same with the state(6) to 
be set 00:22:05.430 [2024-11-15 10:40:53.612816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.430 [2024-11-15 10:40:53.612842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d1090 with addr=10.0.0.2, port=4420 00:22:05.430 [2024-11-15 10:40:53.612859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1090 is same with the state(6) to be set 00:22:05.430 [2024-11-15 10:40:53.613054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.430 [2024-11-15 10:40:53.613080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d1e90 with addr=10.0.0.2, port=4420 00:22:05.430 [2024-11-15 10:40:53.613096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1e90 is same with the state(6) to be set 00:22:05.430 [2024-11-15 10:40:53.613327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.430 [2024-11-15 10:40:53.613353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d81f0 with addr=10.0.0.2, port=4420 00:22:05.430 [2024-11-15 10:40:53.613387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d81f0 is same with the state(6) to be set 00:22:05.430 [2024-11-15 10:40:53.613628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.430 [2024-11-15 10:40:53.613654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d21c0 with addr=10.0.0.2, port=4420 00:22:05.430 [2024-11-15 10:40:53.613670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d21c0 is same with the state(6) to be set 00:22:05.430 [2024-11-15 10:40:53.613689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9060 (9): Bad file descriptor 00:22:05.430 [2024-11-15 10:40:53.613708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:05.430 [2024-11-15 10:40:53.613723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:05.430 [2024-11-15 10:40:53.613740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:05.430 [2024-11-15 10:40:53.613758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:05.430 [2024-11-15 10:40:53.613775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:05.430 [2024-11-15 10:40:53.613787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:05.430 [2024-11-15 10:40:53.613800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:05.430 [2024-11-15 10:40:53.613812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:22:05.430 [2024-11-15 10:40:53.613825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:05.430 [2024-11-15 10:40:53.613837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:05.430 [2024-11-15 10:40:53.613849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:05.430 [2024-11-15 10:40:53.613861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:05.430 [2024-11-15 10:40:53.613875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:05.430 [2024-11-15 10:40:53.613887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:05.430 [2024-11-15 10:40:53.613899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:05.430 [2024-11-15 10:40:53.613910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:05.430 [2024-11-15 10:40:53.613965] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:22:05.430 [2024-11-15 10:40:53.614658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf15110 (9): Bad file descriptor 00:22:05.430 [2024-11-15 10:40:53.614688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d1090 (9): Bad file descriptor 00:22:05.430 [2024-11-15 10:40:53.614708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d1e90 (9): Bad file descriptor 00:22:05.430 [2024-11-15 10:40:53.614726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d81f0 (9): Bad file descriptor 00:22:05.430 [2024-11-15 10:40:53.614742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d21c0 (9): Bad file descriptor 00:22:05.430 [2024-11-15 10:40:53.614757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:05.430 [2024-11-15 10:40:53.614769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:05.430 [2024-11-15 10:40:53.614788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:05.430 [2024-11-15 10:40:53.614801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:05.430 [2024-11-15 10:40:53.614864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:05.430 [2024-11-15 10:40:53.614889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:05.430 [2024-11-15 10:40:53.614906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:05.430 [2024-11-15 10:40:53.614922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:05.430 [2024-11-15 10:40:53.614964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:05.430 [2024-11-15 10:40:53.614981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:05.430 [2024-11-15 10:40:53.614994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:05.430 [2024-11-15 10:40:53.615007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:05.430 [2024-11-15 10:40:53.615020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:05.430 [2024-11-15 10:40:53.615032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:05.430 [2024-11-15 10:40:53.615045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:05.430 [2024-11-15 10:40:53.615056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:05.430 [2024-11-15 10:40:53.615069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:05.430 [2024-11-15 10:40:53.615081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:05.430 [2024-11-15 10:40:53.615093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:05.430 [2024-11-15 10:40:53.615105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:05.430 [2024-11-15 10:40:53.615117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:05.430 [2024-11-15 10:40:53.615129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:05.430 [2024-11-15 10:40:53.615141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:05.430 [2024-11-15 10:40:53.615152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:05.430 [2024-11-15 10:40:53.615165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:05.430 [2024-11-15 10:40:53.615177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:05.430 [2024-11-15 10:40:53.615189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:22:05.430 [2024-11-15 10:40:53.615201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:05.430 [2024-11-15 10:40:53.615477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.430 [2024-11-15 10:40:53.615506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d8f70 with addr=10.0.0.2, port=4420 00:22:05.431 [2024-11-15 10:40:53.615521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8f70 is same with the state(6) to be set 00:22:05.431 [2024-11-15 10:40:53.615623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.431 [2024-11-15 10:40:53.615648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfab0b0 with addr=10.0.0.2, port=4420 00:22:05.431 [2024-11-15 10:40:53.615663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfab0b0 is same with the state(6) to be set 00:22:05.431 [2024-11-15 10:40:53.615844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.431 [2024-11-15 10:40:53.615869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa4220 with addr=10.0.0.2, port=4420 00:22:05.431 [2024-11-15 10:40:53.615884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa4220 is same with the state(6) to be set 00:22:05.431 [2024-11-15 10:40:53.616109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.431 [2024-11-15 10:40:53.616133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfad6f0 with addr=10.0.0.2, port=4420 00:22:05.431 [2024-11-15 10:40:53.616148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfad6f0 is same with the state(6) to be set 00:22:05.431 [2024-11-15 10:40:53.616192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d8f70 (9): Bad file descriptor 00:22:05.431 [2024-11-15 10:40:53.616216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfab0b0 (9): Bad file descriptor 00:22:05.431 [2024-11-15 10:40:53.616234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa4220 (9): Bad file descriptor 00:22:05.431 [2024-11-15 10:40:53.616251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfad6f0 (9): Bad file descriptor 00:22:05.431 [2024-11-15 10:40:53.616289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:05.431 [2024-11-15 10:40:53.616307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:05.431 [2024-11-15 10:40:53.616320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:05.431 [2024-11-15 10:40:53.616332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:22:05.431 [2024-11-15 10:40:53.616347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:05.431 [2024-11-15 10:40:53.616360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:05.431 [2024-11-15 10:40:53.616381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:05.431 [2024-11-15 10:40:53.616393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:05.431 [2024-11-15 10:40:53.616407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:05.431 [2024-11-15 10:40:53.616419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:05.431 [2024-11-15 10:40:53.616431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:05.431 [2024-11-15 10:40:53.616442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:05.431 [2024-11-15 10:40:53.616455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:05.431 [2024-11-15 10:40:53.616466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:05.431 [2024-11-15 10:40:53.616479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:05.431 [2024-11-15 10:40:53.616490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
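For context on the repeated "connect() failed, errno = 111" messages above: on Linux, errno 111 is ECONNREFUSED, meaning nothing was accepting TCP connections on 10.0.0.2:4420 while the target was shutting down, so each reconnect attempt made by the bdev_nvme layer is refused and the controllers are left in the failed state. Below is a minimal, hypothetical shell probe (not part of the SPDK test scripts) that checks whether an NVMe/TCP listener is reachable, using bash's built-in /dev/tcp redirection; the address and port are taken from the log above.

#!/usr/bin/env bash
# Hypothetical probe, not part of the SPDK test scripts: attempt a plain TCP
# connect() to the NVMe/TCP listen address seen in the log. A refused
# connection here corresponds to the errno 111 (ECONNREFUSED) failures above.
addr=10.0.0.2
port=4420
if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
    echo "listener on $addr:$port accepted the connection"
else
    echo "connect to $addr:$port failed (refused or timed out)"
fi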
00:22:05.691 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 424632 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 424632 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 424632 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:06.628 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:06.628 rmmod nvme_tcp 00:22:06.628 
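The xtrace lines just above show the harness's NOT helper (common/autotest_common.sh) normalizing the exit status of "wait 424632" on the already-killed bdevperf process: a status above 128 is folded to 127, any remaining failure collapses to es=1, and the step passes precisely because the wrapped command failed. The sketch below is an illustrative reading of that pattern, with a made-up function name; it is not the actual autotest_common.sh implementation.

#!/usr/bin/env bash
# Simplified sketch of the exit-status normalization visible in the trace;
# illustrative only, not the real autotest_common.sh NOT helper.
not_expected_to_succeed() {
    local es=0
    "$@" || es=$?              # run the wrapped command, capture its status
    if (( es > 128 )); then    # signal-style status, e.g. 255 from wait on a killed PID
        es=127
    fi
    if (( es != 0 )); then     # collapse every failure into a single value
        es=1
    fi
    (( es != 0 ))              # succeed only if the wrapped command failed
}
# Example: not_expected_to_succeed wait "$bdevperf_pid"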
rmmod nvme_fabrics 00:22:06.628 rmmod nvme_keyring 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 424553 ']' 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 424553 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 424553 ']' 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 424553 00:22:06.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (424553) - No such process 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 424553 is not found' 00:22:06.888 Process with pid 424553 is not found 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.888 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:08.796 00:22:08.796 real 0m7.265s 00:22:08.796 user 0m17.519s 00:22:08.796 sys 0m1.429s 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:08.796 ************************************ 00:22:08.796 END TEST nvmf_shutdown_tc3 00:22:08.796 ************************************ 00:22:08.796 10:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:08.796 ************************************ 00:22:08.796 START TEST nvmf_shutdown_tc4 00:22:08.796 ************************************ 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:08.796 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:08.797 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:08.797 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.797 10:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:08.797 Found net devices under 0000:82:00.0: cvl_0_0 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:08.797 Found net devices under 0000:82:00.1: cvl_0_1 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.797 10:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.797 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:22:09.056 00:22:09.056 --- 10.0.0.2 ping statistics --- 00:22:09.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.056 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:09.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:22:09.056 00:22:09.056 --- 10.0.0.1 ping statistics --- 00:22:09.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.056 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=425521 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 425521 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 425521 ']' 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
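The xtrace above (nvmf_tcp_init in nvmf/common.sh) builds the two-port test bed for this run: the target-side E810/ice port cvl_0_0 is moved into its own network namespace and addressed as 10.0.0.2, while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, so the NVMe/TCP traffic crosses a real link. A minimal standalone sketch of the same steps follows, with the interface and namespace names taken from this log and the harness's ipts/xtrace wrappers and error handling left out (root privileges assumed):

# Sketch of the topology nvmf_tcp_init sets up; names come from this log.
NS=cvl_0_0_ns_spdk        # namespace that owns the target-side port
TGT_IF=cvl_0_0            # target NIC, gets 10.0.0.2
INI_IF=cvl_0_1            # initiator NIC, keeps 10.0.0.1 in the root namespace
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Accept inbound TCP to the NVMe/TCP port on the initiator-side interface
# (the harness adds the same rule, tagged with an SPDK_NVMF comment).
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity checks in both directions, mirroring the pings logged above.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

With the namespace in place, nvmfappstart launches build/bin/nvmf_tgt inside it (hence the stacked "ip netns exec cvl_0_0_ns_spdk" prefixes on the command line above) and waitforlisten polls /var/tmp/spdk.sock until the RPC server answers.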
00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:09.056 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:09.056 [2024-11-15 10:40:57.448651] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:22:09.056 [2024-11-15 10:40:57.448768] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.315 [2024-11-15 10:40:57.527165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.315 [2024-11-15 10:40:57.587175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.315 [2024-11-15 10:40:57.587241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.315 [2024-11-15 10:40:57.587270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.315 [2024-11-15 10:40:57.587282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.315 [2024-11-15 10:40:57.587291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.315 [2024-11-15 10:40:57.588971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.315 [2024-11-15 10:40:57.589033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.315 [2024-11-15 10:40:57.589099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:09.315 [2024-11-15 10:40:57.589102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.315 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:09.315 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:22:09.315 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:09.315 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:09.315 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:09.315 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.315 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.315 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.315 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:09.315 [2024-11-15 10:40:57.746167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.315 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.315 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:09.316 10:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.316 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:09.574 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.574 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:09.574 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.574 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:09.574 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.574 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:09.574 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:09.574 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.574 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:09.574 Malloc1 
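The heredoc bodies appended to rpcs.txt by the shutdown.sh@28 loop are not echoed into this log, but the artifacts that do appear (Malloc1 just above and Malloc2 through Malloc10 just below, subsystems named nqn.2016-06.io.spdk:cnodeN in the later failure messages, and a single NVMe/TCP listener announced on 10.0.0.2 port 4420) suggest each iteration appends roughly the batch sketched here, which the bare rpc_cmd at shutdown.sh@36 then appears to replay against the running target in one shot. This is a hypothetical reconstruction, not the literal contents of test/nvmf/target/shutdown.sh; the malloc size/block-size values and the $testdir/$rootdir variables are placeholders in the usual SPDK test-script style.

# Hypothetical per-subsystem block (i=3 shown, matching the cnode3 that shows
# up in the failure messages further down); real values live in shutdown.sh.
i=3
cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
# Replay the accumulated batch; scripts/rpc.py takes one command per line on stdin.
"$rootdir/scripts/rpc.py" < "$testdir/rpcs.txt"

Once the ten subsystems exist, shutdown_tc4 points spdk_nvme_perf at that same 10.0.0.2:4420 listener (the -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' invocation below) and kills the target out from under it, which is what produces the "CQ transport error -6" and "Write completed with error" stream that dominates the rest of this test.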
00:22:09.574 [2024-11-15 10:40:57.848032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.574 Malloc2 00:22:09.574 Malloc3 00:22:09.574 Malloc4 00:22:09.574 Malloc5 00:22:09.832 Malloc6 00:22:09.832 Malloc7 00:22:09.832 Malloc8 00:22:09.832 Malloc9 00:22:09.832 Malloc10 00:22:09.832 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.832 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:09.832 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:09.832 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:10.089 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=425696 00:22:10.090 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:10.090 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:10.090 [2024-11-15 10:40:58.356919] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:15.356 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:15.356 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 425521 00:22:15.356 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 425521 ']' 00:22:15.356 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 425521 00:22:15.356 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:22:15.356 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:15.356 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 425521 00:22:15.356 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:15.356 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:15.356 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 425521' 00:22:15.356 killing process with pid 425521 00:22:15.356 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 425521 00:22:15.356 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 425521 00:22:15.356 [2024-11-15 10:41:03.364215] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c310 is same with the state(6) to be set 00:22:15.356 [2024-11-15 10:41:03.364303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c310 is same with the state(6) to be set 00:22:15.356 [2024-11-15 10:41:03.364320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c310 is same with the state(6) to be set 00:22:15.356 [2024-11-15 10:41:03.364368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c310 is same with the state(6) to be set 00:22:15.356 [2024-11-15 10:41:03.364385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c310 is same with the state(6) to be set 00:22:15.356 [2024-11-15 10:41:03.364399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c310 is same with the state(6) to be set 00:22:15.356 [2024-11-15 10:41:03.364412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c310 is same with the state(6) to be set 00:22:15.356 [2024-11-15 10:41:03.365411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c7e0 is same with the state(6) to be set 00:22:15.356 [2024-11-15 10:41:03.365460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c7e0 is same with the state(6) to be set 00:22:15.356 [2024-11-15 10:41:03.365486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c7e0 is same with the state(6) to be set 00:22:15.356 [2024-11-15 10:41:03.365511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c7e0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.365532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c7e0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.366822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b970 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.366859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b970 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.366881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b970 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.366895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b970 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.366909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b970 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.366922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b970 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.366961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b970 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.366982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b970 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.366996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b970 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d805f0 is same with the 
state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d805f0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d805f0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d805f0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d805f0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d805f0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef1c0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef1c0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef1c0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef1c0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef1c0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef1c0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef1c0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef1c0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef1c0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.368801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef1c0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.369605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef690 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.369636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef690 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.369661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef690 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.370487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d80120 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.370516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d80120 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.370531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d80120 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.370544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d80120 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.370557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d80120 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.370569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d80120 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.370588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d80120 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.370601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d80120 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.370613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d80120 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.379716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9780 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.379778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9780 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.379793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9780 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.379807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9780 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.379819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9780 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.379831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9780 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.380392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9c70 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.380425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9c70 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.380441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9c70 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.380455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9c70 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.380468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9c70 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.380481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9c70 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.381211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda140 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.381243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda140 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.381268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda140 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.381293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda140 is same with the state(6) to be set 00:22:15.357 [2024-11-15 
10:41:03.381313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda140 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.381336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda140 is same with the state(6) to be set 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 starting I/O failed: -6 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 starting I/O failed: -6 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 starting I/O failed: -6 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 starting I/O failed: -6 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 starting I/O failed: -6 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 starting I/O failed: -6 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 starting I/O failed: -6 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 starting I/O failed: -6 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 [2024-11-15 10:41:03.382255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd92b0 is same with the state(6) to be set 00:22:15.357 starting I/O failed: -6 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 [2024-11-15 10:41:03.382294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd92b0 is same with the state(6) to be set 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 [2024-11-15 10:41:03.382309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd92b0 is same with the state(6) to be set 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 [2024-11-15 10:41:03.382322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd92b0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.382335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd92b0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.382347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1fd92b0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.382359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd92b0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.382396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd92b0 is same with the state(6) to be set 00:22:15.357 [2024-11-15 10:41:03.382397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.357 starting I/O failed: -6 00:22:15.357 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 [2024-11-15 10:41:03.382803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdab00 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 [2024-11-15 10:41:03.382828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdab00 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 [2024-11-15 10:41:03.382841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdab00 is same with starting I/O failed: -6 00:22:15.358 the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 [2024-11-15 10:41:03.382855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdab00 is same with the state(6) to be set 00:22:15.358 starting I/O failed: -6 00:22:15.358 [2024-11-15 10:41:03.382868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdab00 is same with Write completed with error (sct=0, sc=8) 00:22:15.358 the state(6) to be set 00:22:15.358 [2024-11-15 10:41:03.382880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdab00 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with 
error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 [2024-11-15 10:41:03.383266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdaff0 is same with starting I/O failed: -6 00:22:15.358 the state(6) to be set 00:22:15.358 [2024-11-15 10:41:03.383293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdaff0 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 [2024-11-15 10:41:03.383307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdaff0 is same with the state(6) to be set 00:22:15.358 [2024-11-15 10:41:03.383320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdaff0 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 [2024-11-15 10:41:03.383333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdaff0 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 [2024-11-15 10:41:03.383345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdaff0 is same with the state(6) to be set 00:22:15.358 [2024-11-15 10:41:03.383358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdaff0 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 [2024-11-15 10:41:03.383396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdaff0 is same with the state(6) to be set 00:22:15.358 [2024-11-15 10:41:03.383409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdaff0 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 [2024-11-15 10:41:03.383661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 [2024-11-15 10:41:03.383867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdb4c0 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 
[2024-11-15 10:41:03.383901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdb4c0 is same with Write completed with error (sct=0, sc=8) 00:22:15.358 the state(6) to be set 00:22:15.358 starting I/O failed: -6 00:22:15.358 [2024-11-15 10:41:03.383929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdb4c0 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 [2024-11-15 10:41:03.383953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdb4c0 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 [2024-11-15 10:41:03.383985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdb4c0 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 [2024-11-15 10:41:03.384008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdb4c0 is same with Write completed with error (sct=0, sc=8) 00:22:15.358 the state(6) to be set 00:22:15.358 starting I/O failed: -6 00:22:15.358 [2024-11-15 10:41:03.384033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdb4c0 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 [2024-11-15 10:41:03.384053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdb4c0 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 [2024-11-15 10:41:03.384075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdb4c0 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 [2024-11-15 10:41:03.384096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdb4c0 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with 
error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 [2024-11-15 10:41:03.384738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda630 is same with the state(6) to be set 00:22:15.358 [2024-11-15 10:41:03.384762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda630 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 starting I/O failed: -6 00:22:15.358 [2024-11-15 10:41:03.384776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda630 is same with the state(6) to be set 00:22:15.358 [2024-11-15 10:41:03.384788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda630 is same with the state(6) to be set 00:22:15.358 Write completed with error (sct=0, sc=8) 00:22:15.358 [2024-11-15 10:41:03.384800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda630 is same with starting I/O failed: -6 00:22:15.358 the state(6) to be set 00:22:15.359 [2024-11-15 10:41:03.384813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda630 is same with the state(6) to be set 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 [2024-11-15 10:41:03.384826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda630 is same with the state(6) to be set 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 [2024-11-15 10:41:03.384839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda630 is same with starting I/O failed: -6 00:22:15.359 the state(6) to be set 00:22:15.359 [2024-11-15 10:41:03.384859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda630 is same with the state(6) to be set 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 [2024-11-15 10:41:03.384871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda630 is same with starting I/O failed: -6 00:22:15.359 the state(6) to be set 00:22:15.359 [2024-11-15 10:41:03.384885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda630 is same with the state(6) to be set 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 [2024-11-15 10:41:03.385075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ 
transport error -6 (No such device or address) on qpair id 3 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with 
error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 [2024-11-15 10:41:03.387055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:15.359 NVMe io qpair process completion error 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, 
sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 [2024-11-15 10:41:03.388561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 starting I/O failed: -6 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.359 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 
starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 [2024-11-15 10:41:03.389625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error 
(sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 [2024-11-15 10:41:03.390986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O 
failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.360 starting I/O failed: -6 00:22:15.360 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 [2024-11-15 10:41:03.393348] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:15.361 NVMe io qpair process completion error 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 [2024-11-15 10:41:03.394773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error 
(sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 [2024-11-15 10:41:03.395971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with 
error (sct=0, sc=8) 00:22:15.361 starting I/O failed: -6 00:22:15.361 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 [2024-11-15 10:41:03.397278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write 
completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write 
completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 [2024-11-15 10:41:03.399690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:15.362 NVMe io qpair process completion error 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 starting I/O failed: -6 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.362 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write 
completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write 
completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 [2024-11-15 10:41:03.402030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed 
with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 [2024-11-15 10:41:03.403426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.363 starting I/O failed: -6 00:22:15.363 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 
starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 [2024-11-15 10:41:03.405908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:15.364 NVMe io qpair process completion error 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with 
error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 
00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 [2024-11-15 10:41:03.408034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.364 Write completed with error (sct=0, sc=8) 00:22:15.364 starting I/O failed: -6 00:22:15.365 Write completed with error (sct=0, sc=8) 00:22:15.365 starting I/O failed: -6 00:22:15.365 Write completed with error (sct=0, sc=8) 00:22:15.365 Write completed with error (sct=0, sc=8) 00:22:15.365 starting I/O failed: -6 00:22:15.365 Write completed with error (sct=0, sc=8) 00:22:15.365 starting I/O failed: -6 00:22:15.365 Write completed 
with error (sct=0, sc=8)
00:22:15.365 starting I/O failed: -6
00:22:15.365 Write completed with error (sct=0, sc=8)
00:22:15.365 starting I/O failed: -6
[the two entries above repeat for every write still queued on this qpair]
00:22:15.365 [2024-11-15 10:41:03.409530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:15.365 Write completed with error (sct=0, sc=8)
00:22:15.365 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:22:15.365 [2024-11-15 10:41:03.412448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:15.365 NVMe io qpair process completion error
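[editor's note] The "(sct=0, sc=8)" pairs above are the NVMe status fields (status code type and status code) that the test application reads from each failed completion; in the generic status set, sc=0x8 is "command aborted due to SQ deletion". As a rough illustration only, not the actual test code, a completion callback built on SPDK's public NVMe API could print such completions like the sketch below; write_complete() and struct io_ctx are invented names.

/* Sketch of a completion callback; illustrative only. */
#include <stdio.h>
#include "spdk/nvme.h"

struct io_ctx {
	int outstanding;        /* writes still in flight */
};

static void
write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = arg;

	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct = status code type, sc = status code; sct=0/sc=8 is what a
		 * qpair teardown is expected to produce for queued writes. */
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
	ctx->outstanding--;
}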
00:22:15.365 Write completed with error (sct=0, sc=8)
00:22:15.365 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.365 [2024-11-15 10:41:03.413735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.366 Write completed with error (sct=0, sc=8)
00:22:15.366 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.366 [2024-11-15 10:41:03.414808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:15.366 Write completed with error (sct=0, sc=8)
00:22:15.366 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.366 [2024-11-15 10:41:03.416167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:15.367 Write completed with error (sct=0, sc=8)
00:22:15.367 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.367 [2024-11-15 10:41:03.421588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:15.367 NVMe io qpair process completion error
00:22:15.367 Write completed with error (sct=0, sc=8)
00:22:15.367 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.367 [2024-11-15 10:41:03.423117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.367 Write completed with error (sct=0, sc=8)
00:22:15.367 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.367 [2024-11-15 10:41:03.424294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:15.368 Write completed with error (sct=0, sc=8)
00:22:15.368 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.368 [2024-11-15 10:41:03.425652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:15.368 Write completed with error (sct=0, sc=8)
00:22:15.368 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.368 [2024-11-15 10:41:03.428298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:15.368 NVMe io qpair process completion error
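[editor's note] The "starting I/O failed: -6" lines look like submit-path failures: once a qpair has hit a transport error, new submissions return -6 (-ENXIO, "No such device or address") immediately. The sketch below shows that pattern against SPDK's public API only as an assumption about how the test app behaves; submit_one_write(), BUF_SIZE, and the buffer handling are invented for the example.

/* Illustrative submit path; not the actual test application. */
#include <stdio.h>
#include "spdk/nvme.h"

#define BUF_SIZE 4096   /* illustrative I/O size; buf must be DMA-able (spdk_malloc) */

static int
submit_one_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		 void *buf, uint64_t lba, spdk_nvme_cmd_cb cb, void *cb_arg)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba,
					BUF_SIZE / spdk_nvme_ns_get_sector_size(ns),
					cb, cb_arg, 0 /* no special I/O flags */);
	if (rc != 0) {
		/* After the qpair is torn down this returns a negative errno
		 * such as -ENXIO (-6), matching "starting I/O failed: -6". */
		printf("starting I/O failed: %d\n", rc);
	}
	return rc;
}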
00:22:15.369 Write completed with error (sct=0, sc=8)
00:22:15.369 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.369 [2024-11-15 10:41:03.429783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.369 Write completed with error (sct=0, sc=8)
00:22:15.369 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.369 [2024-11-15 10:41:03.430990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:15.369 Write completed with error (sct=0, sc=8)
00:22:15.369 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.369 [2024-11-15 10:41:03.432311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:15.370 Write completed with error (sct=0, sc=8)
00:22:15.370 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.370 [2024-11-15 10:41:03.434882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:15.370 NVMe io qpair process completion error
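[editor's note] Each "CQ transport error -6 (No such device or address)" line is nvme_qpair.c reporting that completion polling on that qpair failed at the transport level; the caller sees this as a negative return value from spdk_nvme_qpair_process_completions(). The loop below is only a sketch of how such a caller might react; poll_until_drained() and the outstanding counter are invented for the example.

/* Illustrative completion-polling loop; not the actual test application. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

static void
poll_until_drained(struct spdk_nvme_qpair *qpair, const int *outstanding)
{
	while (*outstanding > 0) {
		/* 0 means "process as many completions as are available". */
		int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

		if (rc < 0) {
			/* e.g. -6 (-ENXIO) once the TCP connection to the target
			 * subsystem is gone; the qpair must be reconnected or
			 * abandoned, so stop polling it. */
			fprintf(stderr, "completion polling failed: %d (%s)\n",
				(int)rc, strerror(-rc));
			break;
		}
	}
}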
00:22:15.370 Write completed with error (sct=0, sc=8)
00:22:15.370 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.370 [2024-11-15 10:41:03.436346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:15.370 Write completed with error (sct=0, sc=8)
00:22:15.370 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.370 [2024-11-15 10:41:03.437399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:15.371 Write completed with error (sct=0, sc=8)
00:22:15.371 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.371 [2024-11-15 10:41:03.438805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:15.371 Write completed with error (sct=0, sc=8)
00:22:15.371 starting I/O failed: -6
[repeated write-failure entries]
00:22:15.371 Write completed with
error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.371 starting I/O failed: -6 00:22:15.371 [2024-11-15 10:41:03.442132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.371 NVMe io qpair process completion error 00:22:15.371 Write completed with error (sct=0, sc=8) 
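The block above is the expected failure signature for this test rather than a regression: sct=0, sc=8 is the NVMe generic status "Command Aborted due to SQ Deletion", reported for every write still queued when its queue pair went away, and "CQ transport error -6 (No such device or address)" is the TCP transport noticing that the connection to the subsystem no longer exists, because the shutdown test tears the target down while spdk_nvme_perf still has writes in flight. A minimal way to provoke the same signature by hand might look like the following sketch (queue depth, I/O size, runtime and the 2-second delay are illustrative, not taken from this run):

  # Drive writes at one TCP subsystem, then delete it mid-run; the perf process should
  # log "Write completed with error (sct=0, sc=8)" and a CQ transport error -6 before exiting.
  ./build/bin/spdk_nvme_perf -q 128 -o 4096 -w write -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode5' &
  perf_pid=$!
  sleep 2
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
  wait $perf_pid || true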
00:22:15.371 Write completed with error (sct=0, sc=8) 00:22:15.372 Write completed with error (sct=0, sc=8) 00:22:15.372 starting I/O failed: -6 00:22:15.372 Write completed with error (sct=0, sc=8) 00:22:15.372 starting I/O failed: -6 00:22:15.372 [2024-11-15 10:41:03.443416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:15.372 Write completed with error (sct=0, sc=8) 00:22:15.372 starting I/O failed: -6 00:22:15.372 Write completed with error (sct=0, sc=8) 00:22:15.372 starting I/O failed: -6 00:22:15.372 [2024-11-15 10:41:03.444627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:15.372 Write completed with error (sct=0, sc=8) 00:22:15.372 starting I/O failed: -6 00:22:15.372 Write completed with error (sct=0, sc=8) 00:22:15.372 starting I/O failed: -6 00:22:15.372 [2024-11-15 10:41:03.446034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:15.372 Write completed with error (sct=0, sc=8) 00:22:15.372 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8)
00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 Write completed with error (sct=0, sc=8) 00:22:15.373 starting I/O failed: -6 00:22:15.373 [2024-11-15 10:41:03.450620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.373 NVMe io qpair process completion error 00:22:15.373 Initializing NVMe Controllers 00:22:15.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:22:15.373 Controller IO queue size 128, less than required. 00:22:15.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:22:15.373 Controller IO queue size 128, less than required. 00:22:15.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:22:15.373 Controller IO queue size 128, less than required. 00:22:15.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:22:15.373 Controller IO queue size 128, less than required. 00:22:15.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:22:15.373 Controller IO queue size 128, less than required. 00:22:15.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:22:15.373 Controller IO queue size 128, less than required. 00:22:15.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:22:15.373 Controller IO queue size 128, less than required. 
00:22:15.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:22:15.373 Controller IO queue size 128, less than required. 00:22:15.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:22:15.373 Controller IO queue size 128, less than required. 00:22:15.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.373 Controller IO queue size 128, less than required. 00:22:15.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:22:15.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:22:15.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:22:15.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:22:15.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:22:15.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:22:15.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:22:15.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:22:15.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:22:15.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:15.373 Initialization complete. Launching workers. 
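The repeated "Controller IO queue size 128, less than required" notices mean the workload submits more I/O per controller than the 128 queue entries each Fabrics controller advertises, so the excess sits queued in the host driver; with a permanently full queue, average completion latency is roughly queue depth divided by IOPS (Little's law), which is what the latency summary below reports. A quick back-of-envelope check against the first row (values copied from the table; treating 128 as the effective queue depth is an assumption):

  # cnode10 row: 1739.34 IOPS at an average latency of 73599.50 us.
  awk 'BEGIN { printf "%.0f us\n", 128 / 1739.34 * 1e6 }'                 # ~73590 us, in line with the table
  awk 'BEGIN { printf "%.0f bytes\n", 74.74 / 1739.34 * 1024 * 1024 }'    # implied I/O size, roughly 44 KiB per write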
00:22:15.373 ========================================================
00:22:15.373 Latency(us)
00:22:15.373 Device Information : IOPS MiB/s Average min max
00:22:15.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1739.34 74.74 73599.50 1214.60 137288.75
00:22:15.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1710.43 73.49 74887.27 1115.94 136311.23
00:22:15.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1718.41 73.84 74610.55 882.58 144742.78
00:22:15.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1706.11 73.31 75182.72 958.50 148388.34
00:22:15.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1660.59 71.35 77275.65 870.68 151574.31
00:22:15.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1713.66 73.63 74924.40 1059.99 129136.67
00:22:15.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1610.75 69.21 78588.63 1298.12 129835.77
00:22:15.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1673.32 71.90 75672.61 1027.62 129854.15
00:22:15.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1663.83 71.49 76128.28 940.81 129676.70
00:22:15.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1671.38 71.82 75803.43 1416.56 129333.94
00:22:15.373 ========================================================
00:22:15.373 Total : 16867.81 724.79 75639.63 870.68 151574.31
00:22:15.373
00:22:15.373 [2024-11-15 10:41:03.454560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1304900 is same with the state(6) to be set
00:22:15.373 [2024-11-15 10:41:03.454659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13035f0 is same with the state(6) to be set
00:22:15.373 [2024-11-15 10:41:03.454723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303c50 is same with the state(6) to be set
00:22:15.373 [2024-11-15 10:41:03.454783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13029e0 is same with the state(6) to be set
00:22:15.373 [2024-11-15 10:41:03.454840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302d10 is same with the state(6) to be set
00:22:15.374 [2024-11-15 10:41:03.454908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303920 is same with the state(6) to be set
00:22:15.374 [2024-11-15 10:41:03.454965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13026b0 is same with the state(6) to be set
00:22:15.374 [2024-11-15 10:41:03.455022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13032c0 is same with the state(6) to be set
00:22:15.374 [2024-11-15 10:41:03.455082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1304ae0 is same with the state(6) to be set
00:22:15.374 [2024-11-15 10:41:03.455141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1304720 is same with the state(6) to be set
00:22:15.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:15.633 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:16.570 10:41:04
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 425696 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 425696 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 425696 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:16.570 rmmod nvme_tcp 00:22:16.570 rmmod nvme_fabrics 00:22:16.570 rmmod nvme_keyring 00:22:16.570 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 425521 ']' 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 425521 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 425521 ']' 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 425521 00:22:16.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (425521) - No such process 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 425521 is not found' 00:22:16.571 Process with pid 425521 is not found 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.571 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.100 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:19.100 00:22:19.101 real 0m9.768s 00:22:19.101 user 0m24.262s 00:22:19.101 sys 0m6.073s 00:22:19.101 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:19.101 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:19.101 ************************************ 00:22:19.101 END TEST nvmf_shutdown_tc4 00:22:19.101 ************************************ 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:19.101 00:22:19.101 real 0m37.111s 00:22:19.101 user 1m40.443s 00:22:19.101 sys 0m12.522s 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- 
# set +x 00:22:19.101 ************************************ 00:22:19.101 END TEST nvmf_shutdown 00:22:19.101 ************************************ 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:19.101 ************************************ 00:22:19.101 START TEST nvmf_nsid 00:22:19.101 ************************************ 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:19.101 * Looking for test storage... 00:22:19.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:19.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.101 --rc genhtml_branch_coverage=1 00:22:19.101 --rc genhtml_function_coverage=1 00:22:19.101 --rc genhtml_legend=1 00:22:19.101 --rc geninfo_all_blocks=1 00:22:19.101 --rc geninfo_unexecuted_blocks=1 00:22:19.101 00:22:19.101 ' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:19.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.101 --rc genhtml_branch_coverage=1 00:22:19.101 --rc genhtml_function_coverage=1 00:22:19.101 --rc genhtml_legend=1 00:22:19.101 --rc geninfo_all_blocks=1 00:22:19.101 --rc geninfo_unexecuted_blocks=1 00:22:19.101 00:22:19.101 ' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:19.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.101 --rc genhtml_branch_coverage=1 00:22:19.101 --rc genhtml_function_coverage=1 00:22:19.101 --rc genhtml_legend=1 00:22:19.101 --rc geninfo_all_blocks=1 00:22:19.101 --rc geninfo_unexecuted_blocks=1 00:22:19.101 00:22:19.101 ' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:19.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.101 --rc genhtml_branch_coverage=1 00:22:19.101 --rc genhtml_function_coverage=1 00:22:19.101 --rc genhtml_legend=1 00:22:19.101 --rc geninfo_all_blocks=1 00:22:19.101 --rc geninfo_unexecuted_blocks=1 00:22:19.101 00:22:19.101 ' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:19.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:19.101 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:21.002 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:21.002 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
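The device-discovery trace in this stretch is the harness matching the two ports of the E810 NIC (vendor 0x8086, device 0x159b, bound to the ice driver) against its list of recognized Intel and Mellanox device IDs and then collecting the net devices under each matching PCI function. A standalone equivalent of that sysfs walk, shown only as a sketch (the ID list is abbreviated and the loop is not the harness's own helper), is:

  # Print every PCI function whose vendor:device pair is in the recognized list,
  # plus the kernel network interfaces bound to it (if any).
  recognized="0x8086:0x1592 0x8086:0x159b 0x8086:0x37d2 0x15b3:0x1017"
  for pci in /sys/bus/pci/devices/*; do
      id="$(cat "$pci/vendor"):$(cat "$pci/device")"
      case " $recognized " in
          *" $id "*)
              echo "Found ${pci##*/} ($id)"
              ls "$pci/net" 2>/dev/null
              ;;
      esac
  done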
00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:21.002 Found net devices under 0000:82:00.0: cvl_0_0 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:21.002 Found net devices under 0000:82:00.1: cvl_0_1 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.002 10:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:21.002 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:21.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:22:21.002 00:22:21.003 --- 10.0.0.2 ping statistics --- 00:22:21.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.003 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:21.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:22:21.003 00:22:21.003 --- 10.0.0.1 ping statistics --- 00:22:21.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.003 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=428449 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 428449 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 428449 ']' 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:21.003 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:21.260 [2024-11-15 10:41:09.495422] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
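The nvmf_tcp_init trace above builds the loopback-style topology the rest of the test depends on: one E810 port stays on the host as the initiator, the other is moved into a network namespace to act as the target, and connectivity is verified with a ping in each direction. A condensed sketch of those same steps, with the interface names and addresses taken from this run (the SPDK_NVMF comment on the iptables rule is what the later cleanup filters on):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                  # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespaced target -> host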
00:22:21.260 [2024-11-15 10:41:09.495493] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.260 [2024-11-15 10:41:09.561035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.260 [2024-11-15 10:41:09.613110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.260 [2024-11-15 10:41:09.613170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.260 [2024-11-15 10:41:09.613197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.260 [2024-11-15 10:41:09.613208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.260 [2024-11-15 10:41:09.613217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.260 [2024-11-15 10:41:09.613864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.260 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:21.260 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:22:21.260 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:21.260 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:21.260 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:21.518 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=428469 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
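From here the test runs two SPDK targets at once: the namespaced nvmf_tgt started by nvmfappstart on the default /var/tmp/spdk.sock, and a second spdk_tgt on the host bound to /var/tmp/tgt2.sock. Keeping the RPC sockets distinct is what lets the same rpc.py drive either application; a sketch of the pattern as it appears in this trace (binaries, masks and socket paths taken from the log, the RPC method itself left as a placeholder):

    # Target 1: inside the namespace, default RPC socket /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &

    # Target 2: on the host, explicit RPC socket so the two targets do not collide
    ./build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &

    # RPCs go to target 1 by default; -s redirects them to target 2
    ./scripts/rpc.py <method ...>                        # placeholder method, talks to tgt1
    ./scripts/rpc.py -s /var/tmp/tgt2.sock <method ...>  # placeholder method, talks to tgt2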
00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=d23bf081-fd07-4a34-842d-6e72ac23b06f 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=3fa2e366-7a20-4f49-89bc-86680572d83f 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=672cbb34-0afa-471c-9857-28803274cfd1 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:21.519 null0 00:22:21.519 null1 00:22:21.519 null2 00:22:21.519 [2024-11-15 10:41:09.792301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.519 [2024-11-15 10:41:09.803982] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:22:21.519 [2024-11-15 10:41:09.804048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428469 ] 00:22:21.519 [2024-11-15 10:41:09.816549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 428469 /var/tmp/tgt2.sock 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 428469 ']' 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:21.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
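The three uuidgen values above become the UUIDs of the namespaces created on the second target; further down the trace each /dev/nvme0nN is validated by reading its NGUID with nvme id-ns and comparing it against the UUID with the dashes stripped (the uuid2nguid helper seen in the trace is essentially `tr -d -` plus an uppercase comparison). A sketch of that check for the first namespace, using the UUID generated in this run:

    ns1uuid=d23bf081-fd07-4a34-842d-6e72ac23b06f
    expected=$(tr -d - <<< "$ns1uuid")                        # d23bf081fd074a34842d6e72ac23b06f
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)   # NGUID as reported by the controller
    [[ ${nguid^^} == "${expected^^}" ]] && echo "nsid 1 NGUID matches its UUID"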
00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:21.519 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:21.519 [2024-11-15 10:41:09.868784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.519 [2024-11-15 10:41:09.926258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.778 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:21.778 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:22:21.778 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:22.344 [2024-11-15 10:41:10.640201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.344 [2024-11-15 10:41:10.656422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:22.344 nvme0n1 nvme0n2 00:22:22.344 nvme1n1 00:22:22.344 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:22.344 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:22.344 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd 00:22:22.910 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:22.910 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:22.910 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:22.910 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:22.910 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:22.910 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:22.910 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:22.910 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:22:22.910 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:22.910 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:22.910 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:22:22.910 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:22:22.910 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:22:23.843 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:23.843 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:23.843 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:23.843 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:23.843 10:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:22:23.843 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid d23bf081-fd07-4a34-842d-6e72ac23b06f 00:22:23.843 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:23.843 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:23.843 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:23.844 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:23.844 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d23bf081fd074a34842d6e72ac23b06f 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D23BF081FD074A34842D6E72AC23B06F 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ D23BF081FD074A34842D6E72AC23B06F == \D\2\3\B\F\0\8\1\F\D\0\7\4\A\3\4\8\4\2\D\6\E\7\2\A\C\2\3\B\0\6\F ]] 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 3fa2e366-7a20-4f49-89bc-86680572d83f 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3fa2e3667a204f4989bc86680572d83f 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3FA2E3667A204F4989BC86680572D83F 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 3FA2E3667A204F4989BC86680572D83F == \3\F\A\2\E\3\6\6\7\A\2\0\4\F\4\9\8\9\B\C\8\6\6\8\0\5\7\2\D\8\3\F ]] 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:22:24.104 10:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 672cbb34-0afa-471c-9857-28803274cfd1 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:24.104 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:24.105 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=672cbb340afa471c985728803274cfd1 00:22:24.105 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 672CBB340AFA471C985728803274CFD1 00:22:24.105 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 672CBB340AFA471C985728803274CFD1 == \6\7\2\C\B\B\3\4\0\A\F\A\4\7\1\C\9\8\5\7\2\8\8\0\3\2\7\4\C\F\D\1 ]] 00:22:24.105 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:24.105 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:24.105 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:24.105 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 428469 00:22:24.105 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 428469 ']' 00:22:24.105 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 428469 00:22:24.105 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:22:24.105 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:24.105 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 428469 00:22:24.363 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:24.363 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:24.363 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 428469' 00:22:24.363 killing process with pid 428469 00:22:24.363 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 428469 00:22:24.363 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 428469 00:22:24.622 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:24.622 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:24.622 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.622 rmmod nvme_tcp 00:22:24.622 rmmod nvme_fabrics 00:22:24.622 rmmod nvme_keyring 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 428449 ']' 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 428449 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 428449 ']' 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 428449 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:24.622 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 428449 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 428449' 00:22:24.881 killing process with pid 428449 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 428449 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 428449 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.881 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.419 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:27.419 00:22:27.419 real 0m8.303s 00:22:27.419 user 0m8.258s 00:22:27.419 
sys 0m2.529s 00:22:27.419 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:27.419 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:27.419 ************************************ 00:22:27.419 END TEST nvmf_nsid 00:22:27.419 ************************************ 00:22:27.419 10:41:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:27.419 00:22:27.419 real 11m49.929s 00:22:27.419 user 28m4.305s 00:22:27.419 sys 2m52.168s 00:22:27.419 10:41:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:27.419 10:41:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:27.419 ************************************ 00:22:27.419 END TEST nvmf_target_extra 00:22:27.419 ************************************ 00:22:27.419 10:41:15 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:27.419 10:41:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:27.419 10:41:15 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:27.419 10:41:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:27.419 ************************************ 00:22:27.419 START TEST nvmf_host 00:22:27.419 ************************************ 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:27.419 * Looking for test storage... 00:22:27.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:27.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.419 --rc genhtml_branch_coverage=1 00:22:27.419 --rc genhtml_function_coverage=1 00:22:27.419 --rc genhtml_legend=1 00:22:27.419 --rc geninfo_all_blocks=1 00:22:27.419 --rc geninfo_unexecuted_blocks=1 00:22:27.419 00:22:27.419 ' 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:27.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.419 --rc genhtml_branch_coverage=1 00:22:27.419 --rc genhtml_function_coverage=1 00:22:27.419 --rc genhtml_legend=1 00:22:27.419 --rc geninfo_all_blocks=1 00:22:27.419 --rc geninfo_unexecuted_blocks=1 00:22:27.419 00:22:27.419 ' 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:27.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.419 --rc genhtml_branch_coverage=1 00:22:27.419 --rc genhtml_function_coverage=1 00:22:27.419 --rc genhtml_legend=1 00:22:27.419 --rc geninfo_all_blocks=1 00:22:27.419 --rc geninfo_unexecuted_blocks=1 00:22:27.419 00:22:27.419 ' 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:27.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.419 --rc genhtml_branch_coverage=1 00:22:27.419 --rc genhtml_function_coverage=1 00:22:27.419 --rc genhtml_legend=1 00:22:27.419 --rc geninfo_all_blocks=1 00:22:27.419 --rc geninfo_unexecuted_blocks=1 00:22:27.419 00:22:27.419 ' 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:27.419 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:27.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.420 ************************************ 00:22:27.420 START TEST nvmf_multicontroller 00:22:27.420 ************************************ 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:27.420 * Looking for test storage... 
00:22:27.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:27.420 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.421 --rc genhtml_branch_coverage=1 00:22:27.421 --rc genhtml_function_coverage=1 00:22:27.421 --rc genhtml_legend=1 00:22:27.421 --rc geninfo_all_blocks=1 00:22:27.421 --rc geninfo_unexecuted_blocks=1 00:22:27.421 00:22:27.421 ' 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.421 --rc genhtml_branch_coverage=1 00:22:27.421 --rc genhtml_function_coverage=1 00:22:27.421 --rc genhtml_legend=1 00:22:27.421 --rc geninfo_all_blocks=1 00:22:27.421 --rc geninfo_unexecuted_blocks=1 00:22:27.421 00:22:27.421 ' 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.421 --rc genhtml_branch_coverage=1 00:22:27.421 --rc genhtml_function_coverage=1 00:22:27.421 --rc genhtml_legend=1 00:22:27.421 --rc geninfo_all_blocks=1 00:22:27.421 --rc geninfo_unexecuted_blocks=1 00:22:27.421 00:22:27.421 ' 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.421 --rc genhtml_branch_coverage=1 00:22:27.421 --rc genhtml_function_coverage=1 00:22:27.421 --rc genhtml_legend=1 00:22:27.421 --rc geninfo_all_blocks=1 00:22:27.421 --rc geninfo_unexecuted_blocks=1 00:22:27.421 00:22:27.421 ' 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:27.421 10:41:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.421 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:27.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:27.422 10:41:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:27.422 10:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:29.954 
10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:29.954 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:29.954 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.954 10:41:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.954 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:29.954 Found net devices under 0000:82:00.0: cvl_0_0 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:29.955 Found net devices under 0000:82:00.1: cvl_0_1 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
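The gather_supported_nvmf_pci_devs block traced above resolves each supported E810 PCI function to its kernel net device by globbing sysfs, which is how the "Found net devices under 0000:82:00.0: cvl_0_0" lines are produced. A minimal standalone sketch of that lookup, using the 0000:82:00.0 address from the log as an assumed example:

    #!/usr/bin/env bash
    # Resolve the kernel net device name(s) behind one PCI function,
    # mirroring the pci_net_devs expansion in nvmf/common.sh above.
    pci=0000:82:00.0                                    # example address taken from the log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # one sysfs entry per bound netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep just the names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # e.g. cvl_0_0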
00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.955 10:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:29.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:22:29.955 00:22:29.955 --- 10.0.0.2 ping statistics --- 00:22:29.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.955 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:22:29.955 00:22:29.955 --- 10.0.0.1 ping statistics --- 00:22:29.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.955 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=430918 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 430918 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 430918 ']' 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:29.955 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.955 [2024-11-15 10:41:18.217543] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:22:29.955 [2024-11-15 10:41:18.217632] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.955 [2024-11-15 10:41:18.288154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:29.955 [2024-11-15 10:41:18.342547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.955 [2024-11-15 10:41:18.342602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.955 [2024-11-15 10:41:18.342630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.955 [2024-11-15 10:41:18.342641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.955 [2024-11-15 10:41:18.342650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.955 [2024-11-15 10:41:18.344166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.955 [2024-11-15 10:41:18.344272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:29.955 [2024-11-15 10:41:18.344281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.214 [2024-11-15 10:41:18.484947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.214 Malloc0 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.214 [2024-11-15 10:41:18.540913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.214 [2024-11-15 10:41:18.548793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.214 Malloc1 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=431055 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 431055 /var/tmp/bdevperf.sock 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 431055 ']' 00:22:30.214 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.215 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:30.215 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
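The rpc_cmd calls traced above build the target-side configuration that bdevperf will attach to: a TCP transport, a 64 MB malloc bdev per subsystem, and listeners for cnode1 and cnode2 on 10.0.0.2 ports 4420 and 4421. A minimal sketch of the cnode1 portion issued by hand with SPDK's scripts/rpc.py; the rpc.py path is assumed, and in this test the same RPCs are actually delivered to the nvmf_tgt running inside the cvl_0_0_ns_spdk namespace:

    rpc=./scripts/rpc.py                                   # assumed path to the SPDK RPC client
    $rpc nvmf_create_transport -t tcp -o -u 8192           # transport options copied from the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421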
00:22:30.215 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:30.215 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.473 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:30.473 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:22:30.473 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:30.473 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.473 10:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.731 NVMe0n1 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.731 1 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.731 request: 00:22:30.731 { 00:22:30.731 "name": "NVMe0", 00:22:30.731 "trtype": "tcp", 00:22:30.731 "traddr": "10.0.0.2", 00:22:30.731 "adrfam": "ipv4", 00:22:30.731 "trsvcid": "4420", 00:22:30.731 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:30.731 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:30.731 "hostaddr": "10.0.0.1", 00:22:30.731 "prchk_reftag": false, 00:22:30.731 "prchk_guard": false, 00:22:30.731 "hdgst": false, 00:22:30.731 "ddgst": false, 00:22:30.731 "allow_unrecognized_csi": false, 00:22:30.731 "method": "bdev_nvme_attach_controller", 00:22:30.731 "req_id": 1 00:22:30.731 } 00:22:30.731 Got JSON-RPC error response 00:22:30.731 response: 00:22:30.731 { 00:22:30.731 "code": -114, 00:22:30.731 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:30.731 } 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.731 request: 00:22:30.731 { 00:22:30.731 "name": "NVMe0", 00:22:30.731 "trtype": "tcp", 00:22:30.731 "traddr": "10.0.0.2", 00:22:30.731 "adrfam": "ipv4", 00:22:30.731 "trsvcid": "4420", 00:22:30.731 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:30.731 "hostaddr": "10.0.0.1", 00:22:30.731 "prchk_reftag": false, 00:22:30.731 "prchk_guard": false, 00:22:30.731 "hdgst": false, 00:22:30.731 "ddgst": false, 00:22:30.731 "allow_unrecognized_csi": false, 00:22:30.731 "method": "bdev_nvme_attach_controller", 00:22:30.731 "req_id": 1 00:22:30.731 } 00:22:30.731 Got JSON-RPC error response 00:22:30.731 response: 00:22:30.731 { 00:22:30.731 "code": -114, 00:22:30.731 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:30.731 } 00:22:30.731 10:41:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.731 request: 00:22:30.731 { 00:22:30.731 "name": "NVMe0", 00:22:30.731 "trtype": "tcp", 00:22:30.731 "traddr": "10.0.0.2", 00:22:30.731 "adrfam": "ipv4", 00:22:30.731 "trsvcid": "4420", 00:22:30.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.731 "hostaddr": "10.0.0.1", 00:22:30.731 "prchk_reftag": false, 00:22:30.731 "prchk_guard": false, 00:22:30.731 "hdgst": false, 00:22:30.731 "ddgst": false, 00:22:30.731 "multipath": "disable", 00:22:30.731 "allow_unrecognized_csi": false, 00:22:30.731 "method": "bdev_nvme_attach_controller", 00:22:30.731 "req_id": 1 00:22:30.731 } 00:22:30.731 Got JSON-RPC error response 00:22:30.731 response: 00:22:30.731 { 00:22:30.731 "code": -114, 00:22:30.731 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:30.731 } 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.731 10:41:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:30.731 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.732 request: 00:22:30.732 { 00:22:30.732 "name": "NVMe0", 00:22:30.732 "trtype": "tcp", 00:22:30.732 "traddr": "10.0.0.2", 00:22:30.732 "adrfam": "ipv4", 00:22:30.732 "trsvcid": "4420", 00:22:30.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.732 "hostaddr": "10.0.0.1", 00:22:30.732 "prchk_reftag": false, 00:22:30.732 "prchk_guard": false, 00:22:30.732 "hdgst": false, 00:22:30.732 "ddgst": false, 00:22:30.732 "multipath": "failover", 00:22:30.732 "allow_unrecognized_csi": false, 00:22:30.732 "method": "bdev_nvme_attach_controller", 00:22:30.732 "req_id": 1 00:22:30.732 } 00:22:30.732 Got JSON-RPC error response 00:22:30.732 response: 00:22:30.732 { 00:22:30.732 "code": -114, 00:22:30.732 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:30.732 } 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.732 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.989 NVMe0n1 00:22:30.989 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
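The four rejected bdev_nvme_attach_controller attempts above (different hostnqn, different subsystem, -x disable, -x failover) all return JSON-RPC error -114 because a controller named NVMe0 already exists on the bdevperf instance, while the plain re-attach to the second listener on port 4421 is accepted as an additional path to NVMe0n1. A sketch of the accepted sequence against the bdevperf RPC socket, with every argument copied from the trace and the rpc.py path assumed:

    rpc='./scripts/rpc.py -s /var/tmp/bdevperf.sock'       # bdevperf RPC socket from the log
    # First path: creates controller NVMe0 and exposes bdev NVMe0n1.
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # Second path to the same subsystem via the 4421 listener; accepted as an
    # extra path rather than rejected with -114.
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1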
00:22:30.989 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:30.989 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.989 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.989 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.989 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:30.989 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.989 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.247 00:22:31.247 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.247 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:31.247 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.247 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:31.247 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.247 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.247 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:31.247 10:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:32.619 { 00:22:32.619 "results": [ 00:22:32.619 { 00:22:32.619 "job": "NVMe0n1", 00:22:32.619 "core_mask": "0x1", 00:22:32.619 "workload": "write", 00:22:32.619 "status": "finished", 00:22:32.619 "queue_depth": 128, 00:22:32.619 "io_size": 4096, 00:22:32.619 "runtime": 1.008818, 00:22:32.619 "iops": 16932.68756108634, 00:22:32.619 "mibps": 66.14331078549351, 00:22:32.619 "io_failed": 0, 00:22:32.619 "io_timeout": 0, 00:22:32.619 "avg_latency_us": 7528.058199100634, 00:22:32.619 "min_latency_us": 5194.334814814815, 00:22:32.619 "max_latency_us": 12913.01925925926 00:22:32.619 } 00:22:32.619 ], 00:22:32.619 "core_count": 1 00:22:32.619 } 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 431055 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 431055 ']' 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 431055 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 431055 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 431055' 00:22:32.619 killing process with pid 431055 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 431055 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 431055 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.619 10:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:32.619 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.619 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:32.619 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.619 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:32.619 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.619 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:32.619 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:32.619 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:32.619 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:32.619 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:22:32.619 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:22:32.619 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:32.619 [2024-11-15 10:41:18.655059] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:22:32.620 [2024-11-15 10:41:18.655144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431055 ] 00:22:32.620 [2024-11-15 10:41:18.723152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.620 [2024-11-15 10:41:18.781288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.620 [2024-11-15 10:41:19.571916] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 39b25d7b-86af-40f2-832c-9578bd176f7b already exists 00:22:32.620 [2024-11-15 10:41:19.571954] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:39b25d7b-86af-40f2-832c-9578bd176f7b alias for bdev NVMe1n1 00:22:32.620 [2024-11-15 10:41:19.571985] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:32.620 Running I/O for 1 seconds... 00:22:32.620 16922.00 IOPS, 66.10 MiB/s 00:22:32.620 Latency(us) 00:22:32.620 [2024-11-15T09:41:21.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.620 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:32.620 NVMe0n1 : 1.01 16932.69 66.14 0.00 0.00 7528.06 5194.33 12913.02 00:22:32.620 [2024-11-15T09:41:21.083Z] =================================================================================================================== 00:22:32.620 [2024-11-15T09:41:21.083Z] Total : 16932.69 66.14 0.00 0.00 7528.06 5194.33 12913.02 00:22:32.620 Received shutdown signal, test time was about 1.000000 seconds 00:22:32.620 00:22:32.620 Latency(us) 00:22:32.620 [2024-11-15T09:41:21.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.620 [2024-11-15T09:41:21.083Z] =================================================================================================================== 00:22:32.620 [2024-11-15T09:41:21.083Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.620 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.620 rmmod nvme_tcp 00:22:32.620 rmmod nvme_fabrics 00:22:32.620 rmmod nvme_keyring 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:32.620 
10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 430918 ']' 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 430918 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 430918 ']' 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 430918 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:32.620 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 430918 00:22:32.878 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:32.878 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:32.878 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 430918' 00:22:32.878 killing process with pid 430918 00:22:32.878 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 430918 00:22:32.878 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 430918 00:22:33.137 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:33.137 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:33.137 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:33.137 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:33.137 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:33.137 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:33.137 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:33.137 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:33.137 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:33.137 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.137 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.137 10:41:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.112 10:41:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:35.112 00:22:35.112 real 0m7.787s 00:22:35.112 user 0m12.263s 00:22:35.112 sys 0m2.506s 00:22:35.112 10:41:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:35.112 10:41:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:35.112 ************************************ 00:22:35.112 END TEST nvmf_multicontroller 00:22:35.112 ************************************ 00:22:35.112 10:41:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:35.112 10:41:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:35.112 10:41:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:35.112 10:41:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.112 ************************************ 00:22:35.112 START TEST nvmf_aer 00:22:35.112 ************************************ 00:22:35.112 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:35.112 * Looking for test storage... 00:22:35.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:35.112 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:35.112 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:22:35.112 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:35.417 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:35.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.418 --rc genhtml_branch_coverage=1 00:22:35.418 --rc genhtml_function_coverage=1 00:22:35.418 --rc genhtml_legend=1 00:22:35.418 --rc geninfo_all_blocks=1 00:22:35.418 --rc geninfo_unexecuted_blocks=1 00:22:35.418 00:22:35.418 ' 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:35.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.418 --rc genhtml_branch_coverage=1 00:22:35.418 --rc genhtml_function_coverage=1 00:22:35.418 --rc genhtml_legend=1 00:22:35.418 --rc geninfo_all_blocks=1 00:22:35.418 --rc geninfo_unexecuted_blocks=1 00:22:35.418 00:22:35.418 ' 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:35.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.418 --rc genhtml_branch_coverage=1 00:22:35.418 --rc genhtml_function_coverage=1 00:22:35.418 --rc genhtml_legend=1 00:22:35.418 --rc geninfo_all_blocks=1 00:22:35.418 --rc geninfo_unexecuted_blocks=1 00:22:35.418 00:22:35.418 ' 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:35.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.418 --rc genhtml_branch_coverage=1 00:22:35.418 --rc genhtml_function_coverage=1 00:22:35.418 --rc genhtml_legend=1 00:22:35.418 --rc geninfo_all_blocks=1 00:22:35.418 --rc geninfo_unexecuted_blocks=1 00:22:35.418 00:22:35.418 ' 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.418 10:41:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:37.416 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:37.416 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:37.416 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:37.417 Found net devices under 0000:82:00.0: cvl_0_0 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:37.417 10:41:25 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:37.417 Found net devices under 0000:82:00.1: cvl_0_1 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:37.417 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:37.675 
10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:37.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:22:37.675 00:22:37.675 --- 10.0.0.2 ping statistics --- 00:22:37.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.675 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:22:37.675 00:22:37.675 --- 10.0.0.1 ping statistics --- 00:22:37.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.675 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:37.675 10:41:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:37.675 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:37.675 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:37.675 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:37.675 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:37.675 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=433295 00:22:37.675 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:37.675 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 433295 00:22:37.675 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 433295 ']' 00:22:37.675 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.675 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:37.675 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.675 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:37.675 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:37.675 [2024-11-15 10:41:26.069970] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
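The nvmf_tcp_init trace above boils down to the following bring-up, written out here as a minimal sketch using the interface names and addresses from this run (cvl_0_0 is the E810 port handed to the target, cvl_0_1 the initiator-side port; $SPDK_DIR is a placeholder for the spdk checkout in the workspace):

  # Move the target port into its own network namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, check connectivity both ways, load the host driver
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # The target application then runs inside the namespace, as in nvmfappstart above
  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
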
00:22:37.675 [2024-11-15 10:41:26.070050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.933 [2024-11-15 10:41:26.142172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:37.933 [2024-11-15 10:41:26.199942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.933 [2024-11-15 10:41:26.199994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.933 [2024-11-15 10:41:26.200023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.933 [2024-11-15 10:41:26.200035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.933 [2024-11-15 10:41:26.200044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:37.933 [2024-11-15 10:41:26.201671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.933 [2024-11-15 10:41:26.201757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.933 [2024-11-15 10:41:26.201821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.933 [2024-11-15 10:41:26.201825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.933 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:37.933 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:22:37.933 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:37.933 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:37.933 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:37.933 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.933 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:37.933 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.933 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:37.933 [2024-11-15 10:41:26.350338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.933 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.933 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:37.933 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.933 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.190 Malloc0 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.190 [2024-11-15 10:41:26.419494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.190 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.190 [ 00:22:38.190 { 00:22:38.190 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:38.190 "subtype": "Discovery", 00:22:38.190 "listen_addresses": [], 00:22:38.190 "allow_any_host": true, 00:22:38.190 "hosts": [] 00:22:38.190 }, 00:22:38.190 { 00:22:38.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.190 "subtype": "NVMe", 00:22:38.190 "listen_addresses": [ 00:22:38.190 { 00:22:38.190 "trtype": "TCP", 00:22:38.190 "adrfam": "IPv4", 00:22:38.190 "traddr": "10.0.0.2", 00:22:38.190 "trsvcid": "4420" 00:22:38.190 } 00:22:38.190 ], 00:22:38.190 "allow_any_host": true, 00:22:38.190 "hosts": [], 00:22:38.191 "serial_number": "SPDK00000000000001", 00:22:38.191 "model_number": "SPDK bdev Controller", 00:22:38.191 "max_namespaces": 2, 00:22:38.191 "min_cntlid": 1, 00:22:38.191 "max_cntlid": 65519, 00:22:38.191 "namespaces": [ 00:22:38.191 { 00:22:38.191 "nsid": 1, 00:22:38.191 "bdev_name": "Malloc0", 00:22:38.191 "name": "Malloc0", 00:22:38.191 "nguid": "60722DD6A9454972AF050747EAC3ECCD", 00:22:38.191 "uuid": "60722dd6-a945-4972-af05-0747eac3eccd" 00:22:38.191 } 00:22:38.191 ] 00:22:38.191 } 00:22:38.191 ] 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=433333 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.191 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.449 Malloc1 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.449 [ 00:22:38.449 { 00:22:38.449 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:38.449 "subtype": "Discovery", 00:22:38.449 "listen_addresses": [], 00:22:38.449 "allow_any_host": true, 00:22:38.449 "hosts": [] 00:22:38.449 }, 00:22:38.449 { 00:22:38.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.449 "subtype": "NVMe", 00:22:38.449 "listen_addresses": [ 00:22:38.449 { 00:22:38.449 "trtype": "TCP", 00:22:38.449 "adrfam": "IPv4", 00:22:38.449 "traddr": "10.0.0.2", 00:22:38.449 "trsvcid": "4420" 00:22:38.449 } 00:22:38.449 ], 00:22:38.449 "allow_any_host": true, 00:22:38.449 "hosts": [], 00:22:38.449 "serial_number": "SPDK00000000000001", 00:22:38.449 "model_number": "SPDK bdev Controller", 00:22:38.449 "max_namespaces": 2, 00:22:38.449 "min_cntlid": 1, 00:22:38.449 "max_cntlid": 65519, 00:22:38.449 "namespaces": [ 00:22:38.449 { 00:22:38.449 "nsid": 1, 00:22:38.449 "bdev_name": "Malloc0", 00:22:38.449 "name": "Malloc0", 00:22:38.449 "nguid": "60722DD6A9454972AF050747EAC3ECCD", 00:22:38.449 "uuid": "60722dd6-a945-4972-af05-0747eac3eccd" 00:22:38.449 }, 00:22:38.449 { 00:22:38.449 "nsid": 2, 00:22:38.449 "bdev_name": "Malloc1", 00:22:38.449 "name": "Malloc1", 00:22:38.449 "nguid": "A7B480DE35BC442B82EBF28141BFDE30", 00:22:38.449 "uuid": 
"a7b480de-35bc-442b-82eb-f28141bfde30" 00:22:38.449 } 00:22:38.449 ] 00:22:38.449 } 00:22:38.449 ] 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 433333 00:22:38.449 Asynchronous Event Request test 00:22:38.449 Attaching to 10.0.0.2 00:22:38.449 Attached to 10.0.0.2 00:22:38.449 Registering asynchronous event callbacks... 00:22:38.449 Starting namespace attribute notice tests for all controllers... 00:22:38.449 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:38.449 aer_cb - Changed Namespace 00:22:38.449 Cleaning up... 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:38.449 rmmod nvme_tcp 00:22:38.449 rmmod nvme_fabrics 00:22:38.449 rmmod nvme_keyring 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 433295 ']' 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 433295 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 433295 ']' 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 433295 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:22:38.449 10:41:26 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 433295 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 433295' 00:22:38.449 killing process with pid 433295 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 433295 00:22:38.449 10:41:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 433295 00:22:38.708 10:41:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:38.708 10:41:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:38.708 10:41:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:38.708 10:41:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:38.708 10:41:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:38.708 10:41:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:38.708 10:41:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:38.708 10:41:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:38.708 10:41:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:38.708 10:41:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.708 10:41:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.708 10:41:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.245 10:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.245 00:22:41.245 real 0m5.707s 00:22:41.245 user 0m4.449s 00:22:41.245 sys 0m2.158s 00:22:41.245 10:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:41.246 ************************************ 00:22:41.246 END TEST nvmf_aer 00:22:41.246 ************************************ 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.246 ************************************ 00:22:41.246 START TEST nvmf_async_init 00:22:41.246 ************************************ 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:41.246 * Looking for test storage... 
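Condensed from the rpc_cmd and aer invocations traced above, the nvmf_aer test that just finished is roughly the following sequence (a sketch, not the literal host/aer.sh; rpc.py stands for scripts/rpc.py, which the harness's rpc_cmd wrapper invokes against the target started above, and paths are relative to the spdk checkout):

  # Provision one subsystem with a single malloc-backed namespace and a TCP listener
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 --name Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Start the AER listener and wait for its touch file, i.e. until its callbacks are registered
  rm -f /tmp/aer_touch_file
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

  # Hot-add a second namespace; the "aer_cb - Changed Namespace" line in the log is the AER this triggers
  rpc.py bdev_malloc_create 64 4096 --name Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait $aerpid

  # Teardown, as in nvmftestfini: drop the bdevs and the subsystem, then unload the host modules
  rpc.py bdev_malloc_delete Malloc0
  rpc.py bdev_malloc_delete Malloc1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
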
00:22:41.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:41.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.246 --rc genhtml_branch_coverage=1 00:22:41.246 --rc genhtml_function_coverage=1 00:22:41.246 --rc genhtml_legend=1 00:22:41.246 --rc geninfo_all_blocks=1 00:22:41.246 --rc geninfo_unexecuted_blocks=1 00:22:41.246 00:22:41.246 ' 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:41.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.246 --rc genhtml_branch_coverage=1 00:22:41.246 --rc genhtml_function_coverage=1 00:22:41.246 --rc genhtml_legend=1 00:22:41.246 --rc geninfo_all_blocks=1 00:22:41.246 --rc geninfo_unexecuted_blocks=1 00:22:41.246 00:22:41.246 ' 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:41.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.246 --rc genhtml_branch_coverage=1 00:22:41.246 --rc genhtml_function_coverage=1 00:22:41.246 --rc genhtml_legend=1 00:22:41.246 --rc geninfo_all_blocks=1 00:22:41.246 --rc geninfo_unexecuted_blocks=1 00:22:41.246 00:22:41.246 ' 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:41.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.246 --rc genhtml_branch_coverage=1 00:22:41.246 --rc genhtml_function_coverage=1 00:22:41.246 --rc genhtml_legend=1 00:22:41.246 --rc geninfo_all_blocks=1 00:22:41.246 --rc geninfo_unexecuted_blocks=1 00:22:41.246 00:22:41.246 ' 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.246 10:41:29 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.246 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:41.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:41.247 10:41:29 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2f9ecff1496e4b1dac688f3222ddad9a 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.247 10:41:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:43.147 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:43.147 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:43.147 Found net devices under 0000:82:00.0: cvl_0_0 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:43.147 Found net devices under 0000:82:00.1: cvl_0_1 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.147 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.148 10:41:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.148 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.148 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.148 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.148 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.148 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.148 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.148 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.148 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.148 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.148 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:43.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:22:43.407 00:22:43.407 --- 10.0.0.2 ping statistics --- 00:22:43.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.407 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:43.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:22:43.407 00:22:43.407 --- 10.0.0.1 ping statistics --- 00:22:43.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.407 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=435387 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 435387 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 435387 ']' 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:43.407 10:41:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.407 [2024-11-15 10:41:31.788070] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
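The namespace plumbing traced above moves one E810 port (cvl_0_0, the target side) into its own netns while the peer port (cvl_0_1) stays in the default namespace as the initiator, verifies reachability with a ping in each direction, and then starts nvmf_tgt inside that netns. A minimal sketch, reusing the interface names, addresses, and flags from the log; the socket wait loop is a simplified stand-in for the harness's waitforlisten helper:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # reachability check
    # start the target inside the namespace and wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done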
00:22:43.407 [2024-11-15 10:41:31.788155] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.407 [2024-11-15 10:41:31.860597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.665 [2024-11-15 10:41:31.921443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.665 [2024-11-15 10:41:31.921503] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.665 [2024-11-15 10:41:31.921532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.665 [2024-11-15 10:41:31.921544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.665 [2024-11-15 10:41:31.921555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.665 [2024-11-15 10:41:31.922217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.665 [2024-11-15 10:41:32.073213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.665 null0 00:22:43.665 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2f9ecff1496e4b1dac688f3222ddad9a 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.666 [2024-11-15 10:41:32.113522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.666 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.925 nvme0n1 00:22:43.925 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.925 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:43.925 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.925 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.925 [ 00:22:43.925 { 00:22:43.925 "name": "nvme0n1", 00:22:43.925 "aliases": [ 00:22:43.925 "2f9ecff1-496e-4b1d-ac68-8f3222ddad9a" 00:22:43.925 ], 00:22:43.925 "product_name": "NVMe disk", 00:22:43.925 "block_size": 512, 00:22:43.925 "num_blocks": 2097152, 00:22:43.925 "uuid": "2f9ecff1-496e-4b1d-ac68-8f3222ddad9a", 00:22:43.925 "numa_id": 1, 00:22:43.925 "assigned_rate_limits": { 00:22:43.925 "rw_ios_per_sec": 0, 00:22:43.925 "rw_mbytes_per_sec": 0, 00:22:43.925 "r_mbytes_per_sec": 0, 00:22:43.925 "w_mbytes_per_sec": 0 00:22:43.925 }, 00:22:43.925 "claimed": false, 00:22:43.925 "zoned": false, 00:22:43.925 "supported_io_types": { 00:22:43.925 "read": true, 00:22:43.925 "write": true, 00:22:43.925 "unmap": false, 00:22:43.925 "flush": true, 00:22:43.925 "reset": true, 00:22:43.925 "nvme_admin": true, 00:22:43.925 "nvme_io": true, 00:22:43.925 "nvme_io_md": false, 00:22:43.925 "write_zeroes": true, 00:22:43.925 "zcopy": false, 00:22:43.925 "get_zone_info": false, 00:22:43.925 "zone_management": false, 00:22:43.925 "zone_append": false, 00:22:43.925 "compare": true, 00:22:43.925 "compare_and_write": true, 00:22:43.925 "abort": true, 00:22:43.925 "seek_hole": false, 00:22:43.925 "seek_data": false, 00:22:43.925 "copy": true, 00:22:43.925 "nvme_iov_md": false 00:22:43.925 }, 00:22:43.925 
"memory_domains": [ 00:22:43.925 { 00:22:43.925 "dma_device_id": "system", 00:22:43.925 "dma_device_type": 1 00:22:43.925 } 00:22:43.925 ], 00:22:43.925 "driver_specific": { 00:22:43.925 "nvme": [ 00:22:43.925 { 00:22:43.925 "trid": { 00:22:43.925 "trtype": "TCP", 00:22:43.925 "adrfam": "IPv4", 00:22:43.925 "traddr": "10.0.0.2", 00:22:43.925 "trsvcid": "4420", 00:22:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:43.925 }, 00:22:43.925 "ctrlr_data": { 00:22:43.925 "cntlid": 1, 00:22:43.925 "vendor_id": "0x8086", 00:22:43.925 "model_number": "SPDK bdev Controller", 00:22:43.925 "serial_number": "00000000000000000000", 00:22:43.925 "firmware_revision": "25.01", 00:22:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:43.925 "oacs": { 00:22:43.925 "security": 0, 00:22:43.925 "format": 0, 00:22:43.925 "firmware": 0, 00:22:43.925 "ns_manage": 0 00:22:43.925 }, 00:22:43.925 "multi_ctrlr": true, 00:22:43.925 "ana_reporting": false 00:22:43.925 }, 00:22:43.925 "vs": { 00:22:43.925 "nvme_version": "1.3" 00:22:43.925 }, 00:22:43.925 "ns_data": { 00:22:43.925 "id": 1, 00:22:43.925 "can_share": true 00:22:43.925 } 00:22:43.925 } 00:22:43.925 ], 00:22:43.925 "mp_policy": "active_passive" 00:22:43.925 } 00:22:43.925 } 00:22:43.925 ] 00:22:43.925 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.925 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:43.925 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.925 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.925 [2024-11-15 10:41:32.364092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:43.925 [2024-11-15 10:41:32.364193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1711b20 (9): Bad file descriptor 00:22:44.183 [2024-11-15 10:41:32.506487] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:22:44.183 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.183 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:44.183 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.183 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.183 [ 00:22:44.183 { 00:22:44.183 "name": "nvme0n1", 00:22:44.183 "aliases": [ 00:22:44.183 "2f9ecff1-496e-4b1d-ac68-8f3222ddad9a" 00:22:44.183 ], 00:22:44.183 "product_name": "NVMe disk", 00:22:44.183 "block_size": 512, 00:22:44.183 "num_blocks": 2097152, 00:22:44.183 "uuid": "2f9ecff1-496e-4b1d-ac68-8f3222ddad9a", 00:22:44.183 "numa_id": 1, 00:22:44.183 "assigned_rate_limits": { 00:22:44.183 "rw_ios_per_sec": 0, 00:22:44.183 "rw_mbytes_per_sec": 0, 00:22:44.183 "r_mbytes_per_sec": 0, 00:22:44.183 "w_mbytes_per_sec": 0 00:22:44.183 }, 00:22:44.183 "claimed": false, 00:22:44.183 "zoned": false, 00:22:44.183 "supported_io_types": { 00:22:44.183 "read": true, 00:22:44.183 "write": true, 00:22:44.183 "unmap": false, 00:22:44.183 "flush": true, 00:22:44.183 "reset": true, 00:22:44.183 "nvme_admin": true, 00:22:44.183 "nvme_io": true, 00:22:44.183 "nvme_io_md": false, 00:22:44.183 "write_zeroes": true, 00:22:44.183 "zcopy": false, 00:22:44.183 "get_zone_info": false, 00:22:44.183 "zone_management": false, 00:22:44.183 "zone_append": false, 00:22:44.183 "compare": true, 00:22:44.183 "compare_and_write": true, 00:22:44.183 "abort": true, 00:22:44.183 "seek_hole": false, 00:22:44.183 "seek_data": false, 00:22:44.183 "copy": true, 00:22:44.183 "nvme_iov_md": false 00:22:44.183 }, 00:22:44.183 "memory_domains": [ 00:22:44.183 { 00:22:44.183 "dma_device_id": "system", 00:22:44.183 "dma_device_type": 1 00:22:44.183 } 00:22:44.183 ], 00:22:44.183 "driver_specific": { 00:22:44.183 "nvme": [ 00:22:44.183 { 00:22:44.183 "trid": { 00:22:44.183 "trtype": "TCP", 00:22:44.183 "adrfam": "IPv4", 00:22:44.183 "traddr": "10.0.0.2", 00:22:44.183 "trsvcid": "4420", 00:22:44.183 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:44.183 }, 00:22:44.183 "ctrlr_data": { 00:22:44.183 "cntlid": 2, 00:22:44.183 "vendor_id": "0x8086", 00:22:44.183 "model_number": "SPDK bdev Controller", 00:22:44.183 "serial_number": "00000000000000000000", 00:22:44.183 "firmware_revision": "25.01", 00:22:44.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:44.183 "oacs": { 00:22:44.183 "security": 0, 00:22:44.183 "format": 0, 00:22:44.183 "firmware": 0, 00:22:44.183 "ns_manage": 0 00:22:44.183 }, 00:22:44.183 "multi_ctrlr": true, 00:22:44.183 "ana_reporting": false 00:22:44.183 }, 00:22:44.183 "vs": { 00:22:44.183 "nvme_version": "1.3" 00:22:44.183 }, 00:22:44.183 "ns_data": { 00:22:44.183 "id": 1, 00:22:44.183 "can_share": true 00:22:44.183 } 00:22:44.183 } 00:22:44.183 ], 00:22:44.183 "mp_policy": "active_passive" 00:22:44.183 } 00:22:44.183 } 00:22:44.183 ] 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
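The second bdev_get_bdevs dump is what the test compares against the first: the namespace identity (uuid/nguid) must survive the reset, while ctrlr_data.cntlid moves from 1 to 2 as the controller reconnects. A hypothetical jq-based spot check in the same spirit (async_init.sh performs its own field comparisons; jq availability is assumed here):

    uuid=$(./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq -r '.[0].uuid' | tr -d -)
    [ "$uuid" = "2f9ecff1496e4b1dac688f3222ddad9a" ] || echo "nguid mismatch after reset"
    cntlid=$(./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq -r '.[0].driver_specific.nvme[0].ctrlr_data.cntlid')
    echo "controller reconnected with cntlid=$cntlid"       # 2 in the run above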
00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.G9zj9ihPEb 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.G9zj9ihPEb 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.G9zj9ihPEb 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.184 [2024-11-15 10:41:32.560769] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.184 [2024-11-15 10:41:32.560892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.184 [2024-11-15 10:41:32.576789] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:44.184 nvme0n1 00:22:44.184 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.442 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:22:44.442 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.442 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.442 [ 00:22:44.442 { 00:22:44.442 "name": "nvme0n1", 00:22:44.442 "aliases": [ 00:22:44.442 "2f9ecff1-496e-4b1d-ac68-8f3222ddad9a" 00:22:44.442 ], 00:22:44.442 "product_name": "NVMe disk", 00:22:44.442 "block_size": 512, 00:22:44.442 "num_blocks": 2097152, 00:22:44.442 "uuid": "2f9ecff1-496e-4b1d-ac68-8f3222ddad9a", 00:22:44.442 "numa_id": 1, 00:22:44.442 "assigned_rate_limits": { 00:22:44.443 "rw_ios_per_sec": 0, 00:22:44.443 "rw_mbytes_per_sec": 0, 00:22:44.443 "r_mbytes_per_sec": 0, 00:22:44.443 "w_mbytes_per_sec": 0 00:22:44.443 }, 00:22:44.443 "claimed": false, 00:22:44.443 "zoned": false, 00:22:44.443 "supported_io_types": { 00:22:44.443 "read": true, 00:22:44.443 "write": true, 00:22:44.443 "unmap": false, 00:22:44.443 "flush": true, 00:22:44.443 "reset": true, 00:22:44.443 "nvme_admin": true, 00:22:44.443 "nvme_io": true, 00:22:44.443 "nvme_io_md": false, 00:22:44.443 "write_zeroes": true, 00:22:44.443 "zcopy": false, 00:22:44.443 "get_zone_info": false, 00:22:44.443 "zone_management": false, 00:22:44.443 "zone_append": false, 00:22:44.443 "compare": true, 00:22:44.443 "compare_and_write": true, 00:22:44.443 "abort": true, 00:22:44.443 "seek_hole": false, 00:22:44.443 "seek_data": false, 00:22:44.443 "copy": true, 00:22:44.443 "nvme_iov_md": false 00:22:44.443 }, 00:22:44.443 "memory_domains": [ 00:22:44.443 { 00:22:44.443 "dma_device_id": "system", 00:22:44.443 "dma_device_type": 1 00:22:44.443 } 00:22:44.443 ], 00:22:44.443 "driver_specific": { 00:22:44.443 "nvme": [ 00:22:44.443 { 00:22:44.443 "trid": { 00:22:44.443 "trtype": "TCP", 00:22:44.443 "adrfam": "IPv4", 00:22:44.443 "traddr": "10.0.0.2", 00:22:44.443 "trsvcid": "4421", 00:22:44.443 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:44.443 }, 00:22:44.443 "ctrlr_data": { 00:22:44.443 "cntlid": 3, 00:22:44.443 "vendor_id": "0x8086", 00:22:44.443 "model_number": "SPDK bdev Controller", 00:22:44.443 "serial_number": "00000000000000000000", 00:22:44.443 "firmware_revision": "25.01", 00:22:44.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:44.443 "oacs": { 00:22:44.443 "security": 0, 00:22:44.443 "format": 0, 00:22:44.443 "firmware": 0, 00:22:44.443 "ns_manage": 0 00:22:44.443 }, 00:22:44.443 "multi_ctrlr": true, 00:22:44.443 "ana_reporting": false 00:22:44.443 }, 00:22:44.443 "vs": { 00:22:44.443 "nvme_version": "1.3" 00:22:44.443 }, 00:22:44.443 "ns_data": { 00:22:44.443 "id": 1, 00:22:44.443 "can_share": true 00:22:44.443 } 00:22:44.443 } 00:22:44.443 ], 00:22:44.443 "mp_policy": "active_passive" 00:22:44.443 } 00:22:44.443 } 00:22:44.443 ] 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.G9zj9ihPEb 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
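The TLS leg just traced hinges on a keyring-managed PSK: the interchange-format key is written to a mode-0600 temp file, registered as key0, required for host nqn.2016-06.io.spdk:host1 on the --secure-channel listener at port 4421, and referenced again on the initiator attach. A sketch of that sequence using the same RPCs and the sample key from the log (not a production secret; in the script the key file is only removed during cleanup):

    KEY_PATH=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"
    ./scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
    rm -f "$KEY_PATH"        # the on-disk copy is no longer needed once loaded into the keyring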
00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:44.443 rmmod nvme_tcp 00:22:44.443 rmmod nvme_fabrics 00:22:44.443 rmmod nvme_keyring 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 435387 ']' 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 435387 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 435387 ']' 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 435387 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 435387 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 435387' 00:22:44.443 killing process with pid 435387 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 435387 00:22:44.443 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 435387 00:22:44.716 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:44.716 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:44.716 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:44.716 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:44.716 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:44.716 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:44.716 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:44.716 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:44.716 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:44.716 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.716 
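nvmftestfini then unwinds everything in roughly this order. A rough sketch under the same names as the run above (nvmfpid was 435387 here; remove_spdk_ns is assumed to amount to deleting the target namespace, which returns cvl_0_0 to the default namespace):

    modprobe -r nvme-tcp                                    # also drops nvme_fabrics/nvme_keyring deps
    modprobe -r nvme-fabrics
    kill "$nvmfpid"                                         # stop the nvmf_tgt started earlier
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1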
10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.716 10:41:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.622 10:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:46.622 00:22:46.622 real 0m5.792s 00:22:46.622 user 0m2.175s 00:22:46.622 sys 0m2.057s 00:22:46.622 10:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:46.622 10:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.622 ************************************ 00:22:46.622 END TEST nvmf_async_init 00:22:46.622 ************************************ 00:22:46.622 10:41:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:46.622 10:41:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:46.622 10:41:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:46.622 10:41:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.622 ************************************ 00:22:46.622 START TEST dma 00:22:46.622 ************************************ 00:22:46.622 10:41:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:46.880 * Looking for test storage... 00:22:46.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:46.880 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:46.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.881 --rc genhtml_branch_coverage=1 00:22:46.881 --rc genhtml_function_coverage=1 00:22:46.881 --rc genhtml_legend=1 00:22:46.881 --rc geninfo_all_blocks=1 00:22:46.881 --rc geninfo_unexecuted_blocks=1 00:22:46.881 00:22:46.881 ' 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:46.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.881 --rc genhtml_branch_coverage=1 00:22:46.881 --rc genhtml_function_coverage=1 00:22:46.881 --rc genhtml_legend=1 00:22:46.881 --rc geninfo_all_blocks=1 00:22:46.881 --rc geninfo_unexecuted_blocks=1 00:22:46.881 00:22:46.881 ' 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:46.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.881 --rc genhtml_branch_coverage=1 00:22:46.881 --rc genhtml_function_coverage=1 00:22:46.881 --rc genhtml_legend=1 00:22:46.881 --rc geninfo_all_blocks=1 00:22:46.881 --rc geninfo_unexecuted_blocks=1 00:22:46.881 00:22:46.881 ' 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:46.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.881 --rc genhtml_branch_coverage=1 00:22:46.881 --rc genhtml_function_coverage=1 00:22:46.881 --rc genhtml_legend=1 00:22:46.881 --rc geninfo_all_blocks=1 00:22:46.881 --rc geninfo_unexecuted_blocks=1 00:22:46.881 00:22:46.881 ' 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.881 
10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:46.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:46.881 00:22:46.881 real 0m0.156s 00:22:46.881 user 0m0.098s 00:22:46.881 sys 0m0.067s 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:46.881 ************************************ 00:22:46.881 END TEST dma 00:22:46.881 ************************************ 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.881 ************************************ 00:22:46.881 START TEST nvmf_identify 00:22:46.881 
************************************ 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:46.881 * Looking for test storage... 00:22:46.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:22:46.881 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:47.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.141 --rc genhtml_branch_coverage=1 00:22:47.141 --rc genhtml_function_coverage=1 00:22:47.141 --rc genhtml_legend=1 00:22:47.141 --rc geninfo_all_blocks=1 00:22:47.141 --rc geninfo_unexecuted_blocks=1 00:22:47.141 00:22:47.141 ' 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:47.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.141 --rc genhtml_branch_coverage=1 00:22:47.141 --rc genhtml_function_coverage=1 00:22:47.141 --rc genhtml_legend=1 00:22:47.141 --rc geninfo_all_blocks=1 00:22:47.141 --rc geninfo_unexecuted_blocks=1 00:22:47.141 00:22:47.141 ' 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:47.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.141 --rc genhtml_branch_coverage=1 00:22:47.141 --rc genhtml_function_coverage=1 00:22:47.141 --rc genhtml_legend=1 00:22:47.141 --rc geninfo_all_blocks=1 00:22:47.141 --rc geninfo_unexecuted_blocks=1 00:22:47.141 00:22:47.141 ' 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:47.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.141 --rc genhtml_branch_coverage=1 00:22:47.141 --rc genhtml_function_coverage=1 00:22:47.141 --rc genhtml_legend=1 00:22:47.141 --rc geninfo_all_blocks=1 00:22:47.141 --rc geninfo_unexecuted_blocks=1 00:22:47.141 00:22:47.141 ' 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.141 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:47.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:47.142 10:41:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.043 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:49.044 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:49.044 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:49.044 Found net devices under 0000:82:00.0: cvl_0_0 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:49.044 Found net devices under 0000:82:00.1: cvl_0_1 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.044 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:22:49.302 00:22:49.302 --- 10.0.0.2 ping statistics --- 00:22:49.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.302 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:22:49.302 00:22:49.302 --- 10.0.0.1 ping statistics --- 00:22:49.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.302 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=437532 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 437532 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 437532 ']' 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:49.302 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.302 [2024-11-15 10:41:37.657816] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:22:49.302 [2024-11-15 10:41:37.657917] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.302 [2024-11-15 10:41:37.732149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.559 [2024-11-15 10:41:37.795751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.559 [2024-11-15 10:41:37.795808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.559 [2024-11-15 10:41:37.795837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.559 [2024-11-15 10:41:37.795849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.559 [2024-11-15 10:41:37.795858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.559 [2024-11-15 10:41:37.797604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.559 [2024-11-15 10:41:37.797628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.559 [2024-11-15 10:41:37.797692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.559 [2024-11-15 10:41:37.797696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.559 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:49.559 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:22:49.559 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.559 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.559 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.559 [2024-11-15 10:41:37.926268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.559 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.559 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:49.559 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:49.559 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.559 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:49.559 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.559 10:41:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.559 Malloc0 00:22:49.559 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.559 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:49.559 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.559 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.559 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.559 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:49.559 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.559 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.559 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.559 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.559 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.559 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.818 [2024-11-15 10:41:38.026164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.818 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.818 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:49.818 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.818 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.818 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.818 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:49.818 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.818 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.818 [ 00:22:49.818 { 00:22:49.818 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:49.818 "subtype": "Discovery", 00:22:49.818 "listen_addresses": [ 00:22:49.818 { 00:22:49.818 "trtype": "TCP", 00:22:49.818 "adrfam": "IPv4", 00:22:49.818 "traddr": "10.0.0.2", 00:22:49.818 "trsvcid": "4420" 00:22:49.818 } 00:22:49.818 ], 00:22:49.818 "allow_any_host": true, 00:22:49.818 "hosts": [] 00:22:49.818 }, 00:22:49.818 { 00:22:49.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.818 "subtype": "NVMe", 00:22:49.818 "listen_addresses": [ 00:22:49.818 { 00:22:49.818 "trtype": "TCP", 00:22:49.818 "adrfam": "IPv4", 00:22:49.818 "traddr": "10.0.0.2", 00:22:49.818 "trsvcid": "4420" 00:22:49.818 } 00:22:49.818 ], 00:22:49.818 "allow_any_host": true, 00:22:49.818 "hosts": [], 00:22:49.818 "serial_number": "SPDK00000000000001", 00:22:49.818 "model_number": "SPDK bdev Controller", 00:22:49.818 "max_namespaces": 32, 00:22:49.818 "min_cntlid": 1, 00:22:49.818 "max_cntlid": 65519, 00:22:49.818 "namespaces": [ 00:22:49.818 { 00:22:49.818 "nsid": 1, 00:22:49.818 "bdev_name": "Malloc0", 00:22:49.818 "name": "Malloc0", 00:22:49.818 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:49.818 "eui64": "ABCDEF0123456789", 00:22:49.818 "uuid": "54b64a18-02ca-441c-a604-b77af4188394" 00:22:49.818 } 00:22:49.818 ] 00:22:49.818 } 00:22:49.818 ] 00:22:49.818 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.818 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:49.818 [2024-11-15 10:41:38.068632] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:22:49.818 [2024-11-15 10:41:38.068687] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437669 ] 00:22:49.818 [2024-11-15 10:41:38.119594] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:49.818 [2024-11-15 10:41:38.119677] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:49.818 [2024-11-15 10:41:38.119689] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:49.818 [2024-11-15 10:41:38.119707] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:49.818 [2024-11-15 10:41:38.119723] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:49.818 [2024-11-15 10:41:38.123840] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:49.818 [2024-11-15 10:41:38.123893] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfad690 0 00:22:49.818 [2024-11-15 10:41:38.124108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:49.818 [2024-11-15 10:41:38.124128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:49.818 [2024-11-15 10:41:38.124136] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:49.818 [2024-11-15 10:41:38.124142] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:49.818 [2024-11-15 10:41:38.124198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.818 [2024-11-15 10:41:38.124211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.818 [2024-11-15 10:41:38.124218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfad690) 00:22:49.818 [2024-11-15 10:41:38.124237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:49.818 [2024-11-15 10:41:38.124269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f100, cid 0, qid 0 00:22:49.818 [2024-11-15 10:41:38.131380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.818 [2024-11-15 10:41:38.131398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.818 [2024-11-15 10:41:38.131406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.818 [2024-11-15 10:41:38.131413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f100) on tqpair=0xfad690 00:22:49.818 [2024-11-15 10:41:38.131431] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:49.818 [2024-11-15 10:41:38.131444] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:49.818 [2024-11-15 10:41:38.131454] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:49.818 [2024-11-15 10:41:38.131480] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.818 [2024-11-15 10:41:38.131488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.818 [2024-11-15 10:41:38.131495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfad690) 00:22:49.819 [2024-11-15 10:41:38.131506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.819 [2024-11-15 10:41:38.131530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f100, cid 0, qid 0 00:22:49.819 [2024-11-15 10:41:38.131662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.819 [2024-11-15 10:41:38.131691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.819 [2024-11-15 10:41:38.131697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.131708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f100) on tqpair=0xfad690 00:22:49.819 [2024-11-15 10:41:38.131718] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:49.819 [2024-11-15 10:41:38.131731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:49.819 [2024-11-15 10:41:38.131743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.131750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.131756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfad690) 00:22:49.819 [2024-11-15 10:41:38.131766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.819 [2024-11-15 10:41:38.131787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f100, cid 0, qid 0 00:22:49.819 [2024-11-15 10:41:38.131916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.819 [2024-11-15 10:41:38.131926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.819 [2024-11-15 10:41:38.131933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.131939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f100) on tqpair=0xfad690 00:22:49.819 [2024-11-15 10:41:38.131948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:49.819 [2024-11-15 10:41:38.131961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:49.819 [2024-11-15 10:41:38.131973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.131980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.131986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfad690) 00:22:49.819 [2024-11-15 10:41:38.131995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.819 [2024-11-15 10:41:38.132015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f100, cid 0, qid 0 
00:22:49.819 [2024-11-15 10:41:38.132116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.819 [2024-11-15 10:41:38.132127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.819 [2024-11-15 10:41:38.132133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.132140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f100) on tqpair=0xfad690 00:22:49.819 [2024-11-15 10:41:38.132148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:49.819 [2024-11-15 10:41:38.132163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.132171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.132177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfad690) 00:22:49.819 [2024-11-15 10:41:38.132187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.819 [2024-11-15 10:41:38.132207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f100, cid 0, qid 0 00:22:49.819 [2024-11-15 10:41:38.132284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.819 [2024-11-15 10:41:38.132296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.819 [2024-11-15 10:41:38.132302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.132309] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f100) on tqpair=0xfad690 00:22:49.819 [2024-11-15 10:41:38.132317] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:49.819 [2024-11-15 10:41:38.132329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:49.819 [2024-11-15 10:41:38.132357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:49.819 [2024-11-15 10:41:38.132482] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:49.819 [2024-11-15 10:41:38.132491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:49.819 [2024-11-15 10:41:38.132508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.132515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.132521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfad690) 00:22:49.819 [2024-11-15 10:41:38.132532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.819 [2024-11-15 10:41:38.132553] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f100, cid 0, qid 0 00:22:49.819 [2024-11-15 10:41:38.132682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.819 [2024-11-15 10:41:38.132694] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.819 [2024-11-15 10:41:38.132700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.132706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f100) on tqpair=0xfad690 00:22:49.819 [2024-11-15 10:41:38.132714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:49.819 [2024-11-15 10:41:38.132729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.132737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.132743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfad690) 00:22:49.819 [2024-11-15 10:41:38.132753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.819 [2024-11-15 10:41:38.132773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f100, cid 0, qid 0 00:22:49.819 [2024-11-15 10:41:38.132885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.819 [2024-11-15 10:41:38.132897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.819 [2024-11-15 10:41:38.132903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.132909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f100) on tqpair=0xfad690 00:22:49.819 [2024-11-15 10:41:38.132916] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:49.819 [2024-11-15 10:41:38.132924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:49.819 [2024-11-15 10:41:38.132937] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:49.819 [2024-11-15 10:41:38.132959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:49.819 [2024-11-15 10:41:38.132976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.132983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfad690) 00:22:49.819 [2024-11-15 10:41:38.132993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.819 [2024-11-15 10:41:38.133018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f100, cid 0, qid 0 00:22:49.819 [2024-11-15 10:41:38.133148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.819 [2024-11-15 10:41:38.133161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.819 [2024-11-15 10:41:38.133168] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.133174] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfad690): datao=0, datal=4096, cccid=0 00:22:49.819 [2024-11-15 10:41:38.133181] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x100f100) on tqpair(0xfad690): expected_datao=0, payload_size=4096 00:22:49.819 [2024-11-15 10:41:38.133188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.133206] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.133215] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.176377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.819 [2024-11-15 10:41:38.176396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.819 [2024-11-15 10:41:38.176403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.819 [2024-11-15 10:41:38.176410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f100) on tqpair=0xfad690 00:22:49.819 [2024-11-15 10:41:38.176425] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:49.819 [2024-11-15 10:41:38.176434] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:49.819 [2024-11-15 10:41:38.176441] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:49.819 [2024-11-15 10:41:38.176457] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:49.820 [2024-11-15 10:41:38.176467] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:49.820 [2024-11-15 10:41:38.176475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:49.820 [2024-11-15 10:41:38.176495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:49.820 [2024-11-15 10:41:38.176509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.176517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.176523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfad690) 00:22:49.820 [2024-11-15 10:41:38.176534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:49.820 [2024-11-15 10:41:38.176558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f100, cid 0, qid 0 00:22:49.820 [2024-11-15 10:41:38.176743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.820 [2024-11-15 10:41:38.176757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.820 [2024-11-15 10:41:38.176763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.176769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f100) on tqpair=0xfad690 00:22:49.820 [2024-11-15 10:41:38.176781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.176788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.176794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfad690) 00:22:49.820 [2024-11-15 
10:41:38.176803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.820 [2024-11-15 10:41:38.176813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.176827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.176833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfad690) 00:22:49.820 [2024-11-15 10:41:38.176842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.820 [2024-11-15 10:41:38.176851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.176858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.176863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfad690) 00:22:49.820 [2024-11-15 10:41:38.176872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.820 [2024-11-15 10:41:38.176881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.176887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.176893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:49.820 [2024-11-15 10:41:38.176901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.820 [2024-11-15 10:41:38.176909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:49.820 [2024-11-15 10:41:38.176924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:49.820 [2024-11-15 10:41:38.176935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.176941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfad690) 00:22:49.820 [2024-11-15 10:41:38.176951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.820 [2024-11-15 10:41:38.176973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f100, cid 0, qid 0 00:22:49.820 [2024-11-15 10:41:38.176983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f280, cid 1, qid 0 00:22:49.820 [2024-11-15 10:41:38.176991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f400, cid 2, qid 0 00:22:49.820 [2024-11-15 10:41:38.176998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:49.820 [2024-11-15 10:41:38.177004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f700, cid 4, qid 0 00:22:49.820 [2024-11-15 10:41:38.177148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.820 [2024-11-15 10:41:38.177160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.820 [2024-11-15 10:41:38.177167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.820 
[2024-11-15 10:41:38.177173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f700) on tqpair=0xfad690 00:22:49.820 [2024-11-15 10:41:38.177186] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:49.820 [2024-11-15 10:41:38.177195] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:49.820 [2024-11-15 10:41:38.177213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfad690) 00:22:49.820 [2024-11-15 10:41:38.177232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.820 [2024-11-15 10:41:38.177253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f700, cid 4, qid 0 00:22:49.820 [2024-11-15 10:41:38.177399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.820 [2024-11-15 10:41:38.177413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.820 [2024-11-15 10:41:38.177431] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177439] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfad690): datao=0, datal=4096, cccid=4 00:22:49.820 [2024-11-15 10:41:38.177447] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x100f700) on tqpair(0xfad690): expected_datao=0, payload_size=4096 00:22:49.820 [2024-11-15 10:41:38.177454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177464] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177471] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.820 [2024-11-15 10:41:38.177503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.820 [2024-11-15 10:41:38.177510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f700) on tqpair=0xfad690 00:22:49.820 [2024-11-15 10:41:38.177536] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:49.820 [2024-11-15 10:41:38.177574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfad690) 00:22:49.820 [2024-11-15 10:41:38.177595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.820 [2024-11-15 10:41:38.177606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfad690) 00:22:49.820 [2024-11-15 10:41:38.177628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.820 [2024-11-15 10:41:38.177669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f700, cid 4, qid 0 00:22:49.820 [2024-11-15 10:41:38.177681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f880, cid 5, qid 0 00:22:49.820 [2024-11-15 10:41:38.177857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.820 [2024-11-15 10:41:38.177868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.820 [2024-11-15 10:41:38.177874] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177880] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfad690): datao=0, datal=1024, cccid=4 00:22:49.820 [2024-11-15 10:41:38.177887] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x100f700) on tqpair(0xfad690): expected_datao=0, payload_size=1024 00:22:49.820 [2024-11-15 10:41:38.177894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177902] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177909] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.820 [2024-11-15 10:41:38.177925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.820 [2024-11-15 10:41:38.177931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.177937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f880) on tqpair=0xfad690 00:22:49.820 [2024-11-15 10:41:38.218515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.820 [2024-11-15 10:41:38.218533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.820 [2024-11-15 10:41:38.218540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.218546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f700) on tqpair=0xfad690 00:22:49.820 [2024-11-15 10:41:38.218569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.218580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfad690) 00:22:49.820 [2024-11-15 10:41:38.218591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.820 [2024-11-15 10:41:38.218621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f700, cid 4, qid 0 00:22:49.820 [2024-11-15 10:41:38.218740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.820 [2024-11-15 10:41:38.218754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.820 [2024-11-15 10:41:38.218760] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.218766] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfad690): datao=0, datal=3072, cccid=4 00:22:49.820 [2024-11-15 10:41:38.218773] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x100f700) on tqpair(0xfad690): expected_datao=0, payload_size=3072 00:22:49.820 [2024-11-15 10:41:38.218780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:22:49.820 [2024-11-15 10:41:38.218800] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.218808] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.820 [2024-11-15 10:41:38.263377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.820 [2024-11-15 10:41:38.263395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.821 [2024-11-15 10:41:38.263403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.821 [2024-11-15 10:41:38.263410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f700) on tqpair=0xfad690 00:22:49.821 [2024-11-15 10:41:38.263426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.821 [2024-11-15 10:41:38.263435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfad690) 00:22:49.821 [2024-11-15 10:41:38.263446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.821 [2024-11-15 10:41:38.263477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f700, cid 4, qid 0 00:22:49.821 [2024-11-15 10:41:38.263577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.821 [2024-11-15 10:41:38.263591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.821 [2024-11-15 10:41:38.263597] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.821 [2024-11-15 10:41:38.263603] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfad690): datao=0, datal=8, cccid=4 00:22:49.821 [2024-11-15 10:41:38.263611] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x100f700) on tqpair(0xfad690): expected_datao=0, payload_size=8 00:22:49.821 [2024-11-15 10:41:38.263618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.821 [2024-11-15 10:41:38.263627] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.821 [2024-11-15 10:41:38.263634] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.082 [2024-11-15 10:41:38.304461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.082 [2024-11-15 10:41:38.304482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.082 [2024-11-15 10:41:38.304490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.082 [2024-11-15 10:41:38.304496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f700) on tqpair=0xfad690 00:22:50.082 ===================================================== 00:22:50.082 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:50.082 ===================================================== 00:22:50.082 Controller Capabilities/Features 00:22:50.082 ================================ 00:22:50.082 Vendor ID: 0000 00:22:50.083 Subsystem Vendor ID: 0000 00:22:50.083 Serial Number: .................... 00:22:50.083 Model Number: ........................................ 
00:22:50.083 Firmware Version: 25.01
00:22:50.083 Recommended Arb Burst: 0
00:22:50.083 IEEE OUI Identifier: 00 00 00
00:22:50.083 Multi-path I/O
00:22:50.083 May have multiple subsystem ports: No
00:22:50.083 May have multiple controllers: No
00:22:50.083 Associated with SR-IOV VF: No
00:22:50.083 Max Data Transfer Size: 131072
00:22:50.083 Max Number of Namespaces: 0
00:22:50.083 Max Number of I/O Queues: 1024
00:22:50.083 NVMe Specification Version (VS): 1.3
00:22:50.083 NVMe Specification Version (Identify): 1.3
00:22:50.083 Maximum Queue Entries: 128
00:22:50.083 Contiguous Queues Required: Yes
00:22:50.083 Arbitration Mechanisms Supported
00:22:50.083 Weighted Round Robin: Not Supported
00:22:50.083 Vendor Specific: Not Supported
00:22:50.083 Reset Timeout: 15000 ms
00:22:50.083 Doorbell Stride: 4 bytes
00:22:50.083 NVM Subsystem Reset: Not Supported
00:22:50.083 Command Sets Supported
00:22:50.083 NVM Command Set: Supported
00:22:50.083 Boot Partition: Not Supported
00:22:50.083 Memory Page Size Minimum: 4096 bytes
00:22:50.083 Memory Page Size Maximum: 4096 bytes
00:22:50.083 Persistent Memory Region: Not Supported
00:22:50.083 Optional Asynchronous Events Supported
00:22:50.083 Namespace Attribute Notices: Not Supported
00:22:50.083 Firmware Activation Notices: Not Supported
00:22:50.083 ANA Change Notices: Not Supported
00:22:50.083 PLE Aggregate Log Change Notices: Not Supported
00:22:50.083 LBA Status Info Alert Notices: Not Supported
00:22:50.083 EGE Aggregate Log Change Notices: Not Supported
00:22:50.083 Normal NVM Subsystem Shutdown event: Not Supported
00:22:50.083 Zone Descriptor Change Notices: Not Supported
00:22:50.083 Discovery Log Change Notices: Supported
00:22:50.083 Controller Attributes
00:22:50.083 128-bit Host Identifier: Not Supported
00:22:50.083 Non-Operational Permissive Mode: Not Supported
00:22:50.083 NVM Sets: Not Supported
00:22:50.083 Read Recovery Levels: Not Supported
00:22:50.083 Endurance Groups: Not Supported
00:22:50.083 Predictable Latency Mode: Not Supported
00:22:50.083 Traffic Based Keep ALive: Not Supported
00:22:50.083 Namespace Granularity: Not Supported
00:22:50.083 SQ Associations: Not Supported
00:22:50.083 UUID List: Not Supported
00:22:50.083 Multi-Domain Subsystem: Not Supported
00:22:50.083 Fixed Capacity Management: Not Supported
00:22:50.083 Variable Capacity Management: Not Supported
00:22:50.083 Delete Endurance Group: Not Supported
00:22:50.083 Delete NVM Set: Not Supported
00:22:50.083 Extended LBA Formats Supported: Not Supported
00:22:50.083 Flexible Data Placement Supported: Not Supported
00:22:50.083
00:22:50.083 Controller Memory Buffer Support
00:22:50.083 ================================
00:22:50.083 Supported: No
00:22:50.083
00:22:50.083 Persistent Memory Region Support
00:22:50.083 ================================
00:22:50.083 Supported: No
00:22:50.083
00:22:50.083 Admin Command Set Attributes
00:22:50.083 ============================
00:22:50.083 Security Send/Receive: Not Supported
00:22:50.083 Format NVM: Not Supported
00:22:50.083 Firmware Activate/Download: Not Supported
00:22:50.083 Namespace Management: Not Supported
00:22:50.083 Device Self-Test: Not Supported
00:22:50.083 Directives: Not Supported
00:22:50.083 NVMe-MI: Not Supported
00:22:50.083 Virtualization Management: Not Supported
00:22:50.083 Doorbell Buffer Config: Not Supported
00:22:50.083 Get LBA Status Capability: Not Supported
00:22:50.083 Command & Feature Lockdown Capability: Not Supported
00:22:50.083 Abort Command Limit: 1
00:22:50.083 Async Event Request Limit: 4
00:22:50.083 Number of Firmware Slots: N/A
00:22:50.083 Firmware Slot 1 Read-Only: N/A
00:22:50.083 Firmware Activation Without Reset: N/A
00:22:50.083 Multiple Update Detection Support: N/A
00:22:50.083 Firmware Update Granularity: No Information Provided
00:22:50.083 Per-Namespace SMART Log: No
00:22:50.083 Asymmetric Namespace Access Log Page: Not Supported
00:22:50.083 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:50.083 Command Effects Log Page: Not Supported
00:22:50.083 Get Log Page Extended Data: Supported
00:22:50.083 Telemetry Log Pages: Not Supported
00:22:50.083 Persistent Event Log Pages: Not Supported
00:22:50.083 Supported Log Pages Log Page: May Support
00:22:50.083 Commands Supported & Effects Log Page: Not Supported
00:22:50.083 Feature Identifiers & Effects Log Page:May Support
00:22:50.083 NVMe-MI Commands & Effects Log Page: May Support
00:22:50.083 Data Area 4 for Telemetry Log: Not Supported
00:22:50.083 Error Log Page Entries Supported: 128
00:22:50.083 Keep Alive: Not Supported
00:22:50.083
00:22:50.083 NVM Command Set Attributes
00:22:50.083 ==========================
00:22:50.083 Submission Queue Entry Size
00:22:50.083 Max: 1
00:22:50.083 Min: 1
00:22:50.083 Completion Queue Entry Size
00:22:50.083 Max: 1
00:22:50.083 Min: 1
00:22:50.083 Number of Namespaces: 0
00:22:50.083 Compare Command: Not Supported
00:22:50.083 Write Uncorrectable Command: Not Supported
00:22:50.083 Dataset Management Command: Not Supported
00:22:50.083 Write Zeroes Command: Not Supported
00:22:50.083 Set Features Save Field: Not Supported
00:22:50.083 Reservations: Not Supported
00:22:50.083 Timestamp: Not Supported
00:22:50.083 Copy: Not Supported
00:22:50.083 Volatile Write Cache: Not Present
00:22:50.083 Atomic Write Unit (Normal): 1
00:22:50.083 Atomic Write Unit (PFail): 1
00:22:50.083 Atomic Compare & Write Unit: 1
00:22:50.083 Fused Compare & Write: Supported
00:22:50.083 Scatter-Gather List
00:22:50.083 SGL Command Set: Supported
00:22:50.083 SGL Keyed: Supported
00:22:50.083 SGL Bit Bucket Descriptor: Not Supported
00:22:50.083 SGL Metadata Pointer: Not Supported
00:22:50.083 Oversized SGL: Not Supported
00:22:50.083 SGL Metadata Address: Not Supported
00:22:50.083 SGL Offset: Supported
00:22:50.083 Transport SGL Data Block: Not Supported
00:22:50.083 Replay Protected Memory Block: Not Supported
00:22:50.083
00:22:50.083 Firmware Slot Information
00:22:50.083 =========================
00:22:50.083 Active slot: 0
00:22:50.083
00:22:50.083
00:22:50.083 Error Log
00:22:50.083 =========
00:22:50.083
00:22:50.083 Active Namespaces
00:22:50.083 =================
00:22:50.083 Discovery Log Page
00:22:50.083 ==================
00:22:50.083 Generation Counter: 2
00:22:50.083 Number of Records: 2
00:22:50.083 Record Format: 0
00:22:50.083
00:22:50.083 Discovery Log Entry 0
00:22:50.083 ----------------------
00:22:50.083 Transport Type: 3 (TCP)
00:22:50.083 Address Family: 1 (IPv4)
00:22:50.083 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:50.083 Entry Flags:
00:22:50.083 Duplicate Returned Information: 1
00:22:50.083 Explicit Persistent Connection Support for Discovery: 1
00:22:50.083 Transport Requirements:
00:22:50.083 Secure Channel: Not Required
00:22:50.083 Port ID: 0 (0x0000)
00:22:50.083 Controller ID: 65535 (0xffff)
00:22:50.083 Admin Max SQ Size: 128
00:22:50.083 Transport Service Identifier: 4420
00:22:50.083 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:50.083 Transport Address: 10.0.0.2
00:22:50.083
Discovery Log Entry 1 00:22:50.083 ---------------------- 00:22:50.083 Transport Type: 3 (TCP) 00:22:50.083 Address Family: 1 (IPv4) 00:22:50.083 Subsystem Type: 2 (NVM Subsystem) 00:22:50.083 Entry Flags: 00:22:50.083 Duplicate Returned Information: 0 00:22:50.083 Explicit Persistent Connection Support for Discovery: 0 00:22:50.083 Transport Requirements: 00:22:50.083 Secure Channel: Not Required 00:22:50.083 Port ID: 0 (0x0000) 00:22:50.083 Controller ID: 65535 (0xffff) 00:22:50.083 Admin Max SQ Size: 128 00:22:50.083 Transport Service Identifier: 4420 00:22:50.083 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:50.083 Transport Address: 10.0.0.2 [2024-11-15 10:41:38.304615] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:50.083 [2024-11-15 10:41:38.304652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f100) on tqpair=0xfad690 00:22:50.083 [2024-11-15 10:41:38.304665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.083 [2024-11-15 10:41:38.304673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f280) on tqpair=0xfad690 00:22:50.083 [2024-11-15 10:41:38.304684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.083 [2024-11-15 10:41:38.304691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f400) on tqpair=0xfad690 00:22:50.084 [2024-11-15 10:41:38.304698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.084 [2024-11-15 10:41:38.304706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.084 [2024-11-15 10:41:38.304712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.084 [2024-11-15 10:41:38.304729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.304738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.304744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.084 [2024-11-15 10:41:38.304754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-15 10:41:38.304780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.084 [2024-11-15 10:41:38.304861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.084 [2024-11-15 10:41:38.304874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.084 [2024-11-15 10:41:38.304881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.304887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.084 [2024-11-15 10:41:38.304898] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.304905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.304911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.084 [2024-11-15 
10:41:38.304921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-15 10:41:38.304947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.084 [2024-11-15 10:41:38.305060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.084 [2024-11-15 10:41:38.305073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.084 [2024-11-15 10:41:38.305079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.084 [2024-11-15 10:41:38.305093] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:50.084 [2024-11-15 10:41:38.305100] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:50.084 [2024-11-15 10:41:38.305116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.084 [2024-11-15 10:41:38.305140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-15 10:41:38.305160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.084 [2024-11-15 10:41:38.305261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.084 [2024-11-15 10:41:38.305274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.084 [2024-11-15 10:41:38.305280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305286] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.084 [2024-11-15 10:41:38.305302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.084 [2024-11-15 10:41:38.305330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-15 10:41:38.305373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.084 [2024-11-15 10:41:38.305454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.084 [2024-11-15 10:41:38.305468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.084 [2024-11-15 10:41:38.305474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.084 [2024-11-15 10:41:38.305496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305511] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.084 [2024-11-15 10:41:38.305521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-15 10:41:38.305542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.084 [2024-11-15 10:41:38.305671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.084 [2024-11-15 10:41:38.305698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.084 [2024-11-15 10:41:38.305704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.084 [2024-11-15 10:41:38.305727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.084 [2024-11-15 10:41:38.305751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-15 10:41:38.305771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.084 [2024-11-15 10:41:38.305873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.084 [2024-11-15 10:41:38.305884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.084 [2024-11-15 10:41:38.305891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.084 [2024-11-15 10:41:38.305912] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.305926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.084 [2024-11-15 10:41:38.305936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-15 10:41:38.305956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.084 [2024-11-15 10:41:38.306033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.084 [2024-11-15 10:41:38.306044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.084 [2024-11-15 10:41:38.306050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.306056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.084 [2024-11-15 10:41:38.306071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.306080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.306089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.084 [2024-11-15 10:41:38.306099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-15 10:41:38.306119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.084 [2024-11-15 10:41:38.306194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.084 [2024-11-15 10:41:38.306205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.084 [2024-11-15 10:41:38.306211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.306217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.084 [2024-11-15 10:41:38.306232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.306240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.306246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.084 [2024-11-15 10:41:38.306256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-15 10:41:38.306276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.084 [2024-11-15 10:41:38.306372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.084 [2024-11-15 10:41:38.306385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.084 [2024-11-15 10:41:38.306392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.306398] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.084 [2024-11-15 10:41:38.306414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.306423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.306429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.084 [2024-11-15 10:41:38.306439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-15 10:41:38.306460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.084 [2024-11-15 10:41:38.306589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.084 [2024-11-15 10:41:38.306602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.084 [2024-11-15 10:41:38.306608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.306615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.084 [2024-11-15 10:41:38.306630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.306639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.306645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.084 [2024-11-15 10:41:38.306669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-15 10:41:38.306690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.084 [2024-11-15 
10:41:38.306821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.084 [2024-11-15 10:41:38.306833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.084 [2024-11-15 10:41:38.306839] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.084 [2024-11-15 10:41:38.306845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.084 [2024-11-15 10:41:38.306860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.306869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.306875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.085 [2024-11-15 10:41:38.306888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-15 10:41:38.306909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.085 [2024-11-15 10:41:38.306985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.085 [2024-11-15 10:41:38.306998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.085 [2024-11-15 10:41:38.307004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.307010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.085 [2024-11-15 10:41:38.307025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.307034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.307040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.085 [2024-11-15 10:41:38.307049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-15 10:41:38.307069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.085 [2024-11-15 10:41:38.307145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.085 [2024-11-15 10:41:38.307158] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.085 [2024-11-15 10:41:38.307164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.307170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.085 [2024-11-15 10:41:38.307185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.307193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.307199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.085 [2024-11-15 10:41:38.307209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-15 10:41:38.307228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.085 [2024-11-15 10:41:38.307305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.085 [2024-11-15 10:41:38.307317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.085 [2024-11-15 
10:41:38.307324] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.307330] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.085 [2024-11-15 10:41:38.307360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.311382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.311389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfad690) 00:22:50.085 [2024-11-15 10:41:38.311400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-15 10:41:38.311423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x100f580, cid 3, qid 0 00:22:50.085 [2024-11-15 10:41:38.311554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.085 [2024-11-15 10:41:38.311568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.085 [2024-11-15 10:41:38.311574] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.311580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x100f580) on tqpair=0xfad690 00:22:50.085 [2024-11-15 10:41:38.311607] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:22:50.085 00:22:50.085 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:50.085 [2024-11-15 10:41:38.347196] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
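The discovery log page printed by the first identify pass above advertises two entries behind the same TCP listener on 10.0.0.2:4420: the discovery subsystem itself and the NVM subsystem nqn.2016-06.io.spdk:cnode1. This job drives everything through SPDK's userspace examples; as a rough sketch, the same two entries could instead be consumed by a Linux kernel initiator with nvme-cli. This is not executed here and assumes the nvme-tcp module and nvme-cli are available on the host:

    # sketch only, not part of this test run
    modprobe nvme-tcp
    # query the discovery subsystem advertised above
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    # connect to the NVM subsystem from Discovery Log Entry 1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # tear the association down again when finished
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1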
00:22:50.085 [2024-11-15 10:41:38.347239] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437671 ] 00:22:50.085 [2024-11-15 10:41:38.394567] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:50.085 [2024-11-15 10:41:38.394624] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:50.085 [2024-11-15 10:41:38.394635] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:50.085 [2024-11-15 10:41:38.394650] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:50.085 [2024-11-15 10:41:38.394664] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:50.085 [2024-11-15 10:41:38.398651] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:50.085 [2024-11-15 10:41:38.398704] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13ef690 0 00:22:50.085 [2024-11-15 10:41:38.406396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:50.085 [2024-11-15 10:41:38.406415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:50.085 [2024-11-15 10:41:38.406423] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:50.085 [2024-11-15 10:41:38.406429] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:50.085 [2024-11-15 10:41:38.406463] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.406474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.406480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ef690) 00:22:50.085 [2024-11-15 10:41:38.406495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:50.085 [2024-11-15 10:41:38.406521] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451100, cid 0, qid 0 00:22:50.085 [2024-11-15 10:41:38.414394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.085 [2024-11-15 10:41:38.414411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.085 [2024-11-15 10:41:38.414418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.414425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451100) on tqpair=0x13ef690 00:22:50.085 [2024-11-15 10:41:38.414442] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:50.085 [2024-11-15 10:41:38.414453] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:50.085 [2024-11-15 10:41:38.414461] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:50.085 [2024-11-15 10:41:38.414479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.414488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.414494] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ef690) 00:22:50.085 [2024-11-15 10:41:38.414505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-15 10:41:38.414528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451100, cid 0, qid 0 00:22:50.085 [2024-11-15 10:41:38.414647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.085 [2024-11-15 10:41:38.414661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.085 [2024-11-15 10:41:38.414685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.414693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451100) on tqpair=0x13ef690 00:22:50.085 [2024-11-15 10:41:38.414701] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:50.085 [2024-11-15 10:41:38.414715] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:50.085 [2024-11-15 10:41:38.414727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.414734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.414740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ef690) 00:22:50.085 [2024-11-15 10:41:38.414749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-15 10:41:38.414770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451100, cid 0, qid 0 00:22:50.085 [2024-11-15 10:41:38.414889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.085 [2024-11-15 10:41:38.414902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.085 [2024-11-15 10:41:38.414908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.414914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451100) on tqpair=0x13ef690 00:22:50.085 [2024-11-15 10:41:38.414922] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:50.085 [2024-11-15 10:41:38.414935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:50.085 [2024-11-15 10:41:38.414947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.414954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.414960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ef690) 00:22:50.085 [2024-11-15 10:41:38.414969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-15 10:41:38.414990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451100, cid 0, qid 0 00:22:50.085 [2024-11-15 10:41:38.415109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.085 [2024-11-15 10:41:38.415122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.085 [2024-11-15 
10:41:38.415128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.415134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451100) on tqpair=0x13ef690 00:22:50.085 [2024-11-15 10:41:38.415142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:50.085 [2024-11-15 10:41:38.415158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.415166] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.085 [2024-11-15 10:41:38.415172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ef690) 00:22:50.085 [2024-11-15 10:41:38.415182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-15 10:41:38.415202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451100, cid 0, qid 0 00:22:50.085 [2024-11-15 10:41:38.415276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.086 [2024-11-15 10:41:38.415289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.086 [2024-11-15 10:41:38.415296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.415302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451100) on tqpair=0x13ef690 00:22:50.086 [2024-11-15 10:41:38.415312] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:50.086 [2024-11-15 10:41:38.415320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:50.086 [2024-11-15 10:41:38.415333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:50.086 [2024-11-15 10:41:38.415443] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:50.086 [2024-11-15 10:41:38.415455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:50.086 [2024-11-15 10:41:38.415467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.415474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.415480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ef690) 00:22:50.086 [2024-11-15 10:41:38.415490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.086 [2024-11-15 10:41:38.415513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451100, cid 0, qid 0 00:22:50.086 [2024-11-15 10:41:38.415625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.086 [2024-11-15 10:41:38.415638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.086 [2024-11-15 10:41:38.415645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.415651] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451100) on tqpair=0x13ef690 00:22:50.086 
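Both identify passes in this section run with "-L all", which enables every registered SPDK debug log flag and produces the per-PDU nvme_tcp / nvme_ctrlr traces surrounding this point; these traces are only compiled in when SPDK is configured with --enable-debug, which this build evidently is. As a sketch, a single flag keeps the output focused on the controller state machine shown here; the flag name "nvme" is assumed to be among the flags registered by this build:

    # sketch only: same target, one debug log flag instead of all of them
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L nvme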
[2024-11-15 10:41:38.415659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:50.086 [2024-11-15 10:41:38.415689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.415698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.415704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ef690) 00:22:50.086 [2024-11-15 10:41:38.415714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.086 [2024-11-15 10:41:38.415734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451100, cid 0, qid 0 00:22:50.086 [2024-11-15 10:41:38.415813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.086 [2024-11-15 10:41:38.415826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.086 [2024-11-15 10:41:38.415832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.415838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451100) on tqpair=0x13ef690 00:22:50.086 [2024-11-15 10:41:38.415845] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:50.086 [2024-11-15 10:41:38.415852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:50.086 [2024-11-15 10:41:38.415865] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:50.086 [2024-11-15 10:41:38.415878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:50.086 [2024-11-15 10:41:38.415892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.415899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ef690) 00:22:50.086 [2024-11-15 10:41:38.415909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.086 [2024-11-15 10:41:38.415933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451100, cid 0, qid 0 00:22:50.086 [2024-11-15 10:41:38.416061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.086 [2024-11-15 10:41:38.416073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.086 [2024-11-15 10:41:38.416079] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416085] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ef690): datao=0, datal=4096, cccid=0 00:22:50.086 [2024-11-15 10:41:38.416092] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1451100) on tqpair(0x13ef690): expected_datao=0, payload_size=4096 00:22:50.086 [2024-11-15 10:41:38.416098] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416108] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416115] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.086 [2024-11-15 10:41:38.416135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.086 [2024-11-15 10:41:38.416140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451100) on tqpair=0x13ef690 00:22:50.086 [2024-11-15 10:41:38.416156] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:50.086 [2024-11-15 10:41:38.416164] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:50.086 [2024-11-15 10:41:38.416171] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:50.086 [2024-11-15 10:41:38.416182] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:50.086 [2024-11-15 10:41:38.416190] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:50.086 [2024-11-15 10:41:38.416197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:50.086 [2024-11-15 10:41:38.416216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:50.086 [2024-11-15 10:41:38.416228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ef690) 00:22:50.086 [2024-11-15 10:41:38.416251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.086 [2024-11-15 10:41:38.416272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451100, cid 0, qid 0 00:22:50.086 [2024-11-15 10:41:38.416390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.086 [2024-11-15 10:41:38.416405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.086 [2024-11-15 10:41:38.416411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451100) on tqpair=0x13ef690 00:22:50.086 [2024-11-15 10:41:38.416427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ef690) 00:22:50.086 [2024-11-15 10:41:38.416450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.086 [2024-11-15 10:41:38.416460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.086 [2024-11-15 
10:41:38.416476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13ef690) 00:22:50.086 [2024-11-15 10:41:38.416486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.086 [2024-11-15 10:41:38.416495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13ef690) 00:22:50.086 [2024-11-15 10:41:38.416516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.086 [2024-11-15 10:41:38.416526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ef690) 00:22:50.086 [2024-11-15 10:41:38.416546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.086 [2024-11-15 10:41:38.416555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:50.086 [2024-11-15 10:41:38.416569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:50.086 [2024-11-15 10:41:38.416580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13ef690) 00:22:50.086 [2024-11-15 10:41:38.416597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.086 [2024-11-15 10:41:38.416619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451100, cid 0, qid 0 00:22:50.086 [2024-11-15 10:41:38.416630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451280, cid 1, qid 0 00:22:50.086 [2024-11-15 10:41:38.416638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451400, cid 2, qid 0 00:22:50.086 [2024-11-15 10:41:38.416645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451580, cid 3, qid 0 00:22:50.086 [2024-11-15 10:41:38.416652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451700, cid 4, qid 0 00:22:50.086 [2024-11-15 10:41:38.416805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.086 [2024-11-15 10:41:38.416818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.086 [2024-11-15 10:41:38.416824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.086 [2024-11-15 10:41:38.416830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451700) on tqpair=0x13ef690 00:22:50.086 [2024-11-15 10:41:38.416842] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:50.086 [2024-11-15 10:41:38.416851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:50.086 [2024-11-15 10:41:38.416865] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:50.086 [2024-11-15 10:41:38.416876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:50.086 [2024-11-15 10:41:38.416886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.416893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.416899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13ef690) 00:22:50.087 [2024-11-15 10:41:38.416908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.087 [2024-11-15 10:41:38.416933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451700, cid 4, qid 0 00:22:50.087 [2024-11-15 10:41:38.417051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.087 [2024-11-15 10:41:38.417062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.087 [2024-11-15 10:41:38.417068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.417074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451700) on tqpair=0x13ef690 00:22:50.087 [2024-11-15 10:41:38.417139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:50.087 [2024-11-15 10:41:38.417158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:50.087 [2024-11-15 10:41:38.417172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.417180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13ef690) 00:22:50.087 [2024-11-15 10:41:38.417190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.087 [2024-11-15 10:41:38.417211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451700, cid 4, qid 0 00:22:50.087 [2024-11-15 10:41:38.417334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.087 [2024-11-15 10:41:38.417372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.087 [2024-11-15 10:41:38.417381] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.417387] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ef690): datao=0, datal=4096, cccid=4 00:22:50.087 [2024-11-15 10:41:38.417394] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1451700) on tqpair(0x13ef690): expected_datao=0, payload_size=4096 00:22:50.087 [2024-11-15 10:41:38.417401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.417418] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.417427] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.087 [2024-11-15 
10:41:38.417438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.087 [2024-11-15 10:41:38.417447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.087 [2024-11-15 10:41:38.417453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.417460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451700) on tqpair=0x13ef690 00:22:50.087 [2024-11-15 10:41:38.417476] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:50.087 [2024-11-15 10:41:38.417494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:50.087 [2024-11-15 10:41:38.417512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:50.087 [2024-11-15 10:41:38.417526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.417533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13ef690) 00:22:50.087 [2024-11-15 10:41:38.417543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.087 [2024-11-15 10:41:38.417565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451700, cid 4, qid 0 00:22:50.087 [2024-11-15 10:41:38.417675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.087 [2024-11-15 10:41:38.417688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.087 [2024-11-15 10:41:38.417694] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.417700] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ef690): datao=0, datal=4096, cccid=4 00:22:50.087 [2024-11-15 10:41:38.417725] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1451700) on tqpair(0x13ef690): expected_datao=0, payload_size=4096 00:22:50.087 [2024-11-15 10:41:38.417733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.417750] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.417758] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.417768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.087 [2024-11-15 10:41:38.417777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.087 [2024-11-15 10:41:38.417783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.417789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451700) on tqpair=0x13ef690 00:22:50.087 [2024-11-15 10:41:38.417811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:50.087 [2024-11-15 10:41:38.417830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:50.087 [2024-11-15 10:41:38.417843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.417850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x13ef690) 00:22:50.087 [2024-11-15 10:41:38.417860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.087 [2024-11-15 10:41:38.417881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451700, cid 4, qid 0 00:22:50.087 [2024-11-15 10:41:38.417983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.087 [2024-11-15 10:41:38.417996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.087 [2024-11-15 10:41:38.418002] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.418008] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ef690): datao=0, datal=4096, cccid=4 00:22:50.087 [2024-11-15 10:41:38.418015] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1451700) on tqpair(0x13ef690): expected_datao=0, payload_size=4096 00:22:50.087 [2024-11-15 10:41:38.418022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.418031] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.418038] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.418049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.087 [2024-11-15 10:41:38.418058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.087 [2024-11-15 10:41:38.418064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.418070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451700) on tqpair=0x13ef690 00:22:50.087 [2024-11-15 10:41:38.418083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:50.087 [2024-11-15 10:41:38.418097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:50.087 [2024-11-15 10:41:38.418111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:50.087 [2024-11-15 10:41:38.418123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:50.087 [2024-11-15 10:41:38.418131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:50.087 [2024-11-15 10:41:38.418139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:50.087 [2024-11-15 10:41:38.418150] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:50.087 [2024-11-15 10:41:38.418158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:50.087 [2024-11-15 10:41:38.418166] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:50.087 [2024-11-15 10:41:38.418184] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.087 
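By this point the host has walked the controller for nqn.2016-06.io.spdk:cnode1 through the full init sequence traced above: fabrics connect, register reads via Property Get, the CC.EN / CSTS.RDY handshake, identify, AER and keep-alive setup, and finally "setting state to ready". The target side of the exchange is an SPDK nvmf_tgt that never appears in this excerpt; as a sketch, its view of the subsystems could be dumped over the default RPC socket with rpc.py. The socket location and target layout are assumptions:

    # sketch only, run on the host where the nvmf_tgt serving 10.0.0.2:4420 lives
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
    # should report nqn.2014-08.org.nvmexpress.discovery plus nqn.2016-06.io.spdk:cnode1
    # with a TCP listener matching the discovery log page above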
[2024-11-15 10:41:38.418192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13ef690) 00:22:50.087 [2024-11-15 10:41:38.418202] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.087 [2024-11-15 10:41:38.418213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.418220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.087 [2024-11-15 10:41:38.418225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13ef690) 00:22:50.087 [2024-11-15 10:41:38.418234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.087 [2024-11-15 10:41:38.418258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451700, cid 4, qid 0 00:22:50.087 [2024-11-15 10:41:38.418269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451880, cid 5, qid 0 00:22:50.087 [2024-11-15 10:41:38.422387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.087 [2024-11-15 10:41:38.422404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.088 [2024-11-15 10:41:38.422411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.422417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451700) on tqpair=0x13ef690 00:22:50.088 [2024-11-15 10:41:38.422427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.088 [2024-11-15 10:41:38.422436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.088 [2024-11-15 10:41:38.422442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.422448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451880) on tqpair=0x13ef690 00:22:50.088 [2024-11-15 10:41:38.422465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.422474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13ef690) 00:22:50.088 [2024-11-15 10:41:38.422485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.088 [2024-11-15 10:41:38.422507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451880, cid 5, qid 0 00:22:50.088 [2024-11-15 10:41:38.422655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.088 [2024-11-15 10:41:38.422682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.088 [2024-11-15 10:41:38.422689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.422695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451880) on tqpair=0x13ef690 00:22:50.088 [2024-11-15 10:41:38.422711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.422720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13ef690) 00:22:50.088 [2024-11-15 10:41:38.422729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.088 [2024-11-15 10:41:38.422749] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451880, cid 5, qid 0 00:22:50.088 [2024-11-15 10:41:38.422877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.088 [2024-11-15 10:41:38.422888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.088 [2024-11-15 10:41:38.422894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.422904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451880) on tqpair=0x13ef690 00:22:50.088 [2024-11-15 10:41:38.422919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.422928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13ef690) 00:22:50.088 [2024-11-15 10:41:38.422938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.088 [2024-11-15 10:41:38.422957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451880, cid 5, qid 0 00:22:50.088 [2024-11-15 10:41:38.423063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.088 [2024-11-15 10:41:38.423076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.088 [2024-11-15 10:41:38.423082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451880) on tqpair=0x13ef690 00:22:50.088 [2024-11-15 10:41:38.423111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13ef690) 00:22:50.088 [2024-11-15 10:41:38.423131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.088 [2024-11-15 10:41:38.423143] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13ef690) 00:22:50.088 [2024-11-15 10:41:38.423159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.088 [2024-11-15 10:41:38.423170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x13ef690) 00:22:50.088 [2024-11-15 10:41:38.423186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.088 [2024-11-15 10:41:38.423197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13ef690) 00:22:50.088 [2024-11-15 10:41:38.423213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.088 [2024-11-15 10:41:38.423234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451880, cid 5, qid 0 00:22:50.088 
[2024-11-15 10:41:38.423245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451700, cid 4, qid 0 00:22:50.088 [2024-11-15 10:41:38.423252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451a00, cid 6, qid 0 00:22:50.088 [2024-11-15 10:41:38.423259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451b80, cid 7, qid 0 00:22:50.088 [2024-11-15 10:41:38.423487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.088 [2024-11-15 10:41:38.423502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.088 [2024-11-15 10:41:38.423509] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423515] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ef690): datao=0, datal=8192, cccid=5 00:22:50.088 [2024-11-15 10:41:38.423522] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1451880) on tqpair(0x13ef690): expected_datao=0, payload_size=8192 00:22:50.088 [2024-11-15 10:41:38.423529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423547] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423556] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.088 [2024-11-15 10:41:38.423582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.088 [2024-11-15 10:41:38.423588] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423594] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ef690): datao=0, datal=512, cccid=4 00:22:50.088 [2024-11-15 10:41:38.423602] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1451700) on tqpair(0x13ef690): expected_datao=0, payload_size=512 00:22:50.088 [2024-11-15 10:41:38.423609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423617] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423624] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.088 [2024-11-15 10:41:38.423640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.088 [2024-11-15 10:41:38.423646] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423652] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ef690): datao=0, datal=512, cccid=6 00:22:50.088 [2024-11-15 10:41:38.423659] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1451a00) on tqpair(0x13ef690): expected_datao=0, payload_size=512 00:22:50.088 [2024-11-15 10:41:38.423681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423690] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423697] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.088 [2024-11-15 10:41:38.423712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.088 [2024-11-15 10:41:38.423718] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423724] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ef690): datao=0, datal=4096, cccid=7 00:22:50.088 [2024-11-15 10:41:38.423731] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1451b80) on tqpair(0x13ef690): expected_datao=0, payload_size=4096 00:22:50.088 [2024-11-15 10:41:38.423737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423746] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423752] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.088 [2024-11-15 10:41:38.423772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.088 [2024-11-15 10:41:38.423777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451880) on tqpair=0x13ef690 00:22:50.088 [2024-11-15 10:41:38.423803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.088 [2024-11-15 10:41:38.423814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.088 [2024-11-15 10:41:38.423820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451700) on tqpair=0x13ef690 00:22:50.088 [2024-11-15 10:41:38.423840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.088 [2024-11-15 10:41:38.423850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.088 [2024-11-15 10:41:38.423856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451a00) on tqpair=0x13ef690 00:22:50.088 [2024-11-15 10:41:38.423872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.088 [2024-11-15 10:41:38.423881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.088 [2024-11-15 10:41:38.423886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.088 [2024-11-15 10:41:38.423895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451b80) on tqpair=0x13ef690 00:22:50.088 ===================================================== 00:22:50.088 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.088 ===================================================== 00:22:50.088 Controller Capabilities/Features 00:22:50.088 ================================ 00:22:50.088 Vendor ID: 8086 00:22:50.088 Subsystem Vendor ID: 8086 00:22:50.088 Serial Number: SPDK00000000000001 00:22:50.088 Model Number: SPDK bdev Controller 00:22:50.088 Firmware Version: 25.01 00:22:50.088 Recommended Arb Burst: 6 00:22:50.088 IEEE OUI Identifier: e4 d2 5c 00:22:50.088 Multi-path I/O 00:22:50.088 May have multiple subsystem ports: Yes 00:22:50.088 May have multiple controllers: Yes 00:22:50.088 Associated with SR-IOV VF: No 00:22:50.088 Max Data Transfer Size: 131072 00:22:50.088 Max Number of Namespaces: 32 00:22:50.088 Max Number of I/O Queues: 127 00:22:50.088 NVMe Specification Version (VS): 1.3 00:22:50.088 NVMe Specification Version (Identify): 1.3 
00:22:50.088 Maximum Queue Entries: 128 00:22:50.088 Contiguous Queues Required: Yes 00:22:50.089 Arbitration Mechanisms Supported 00:22:50.089 Weighted Round Robin: Not Supported 00:22:50.089 Vendor Specific: Not Supported 00:22:50.089 Reset Timeout: 15000 ms 00:22:50.089 Doorbell Stride: 4 bytes 00:22:50.089 NVM Subsystem Reset: Not Supported 00:22:50.089 Command Sets Supported 00:22:50.089 NVM Command Set: Supported 00:22:50.089 Boot Partition: Not Supported 00:22:50.089 Memory Page Size Minimum: 4096 bytes 00:22:50.089 Memory Page Size Maximum: 4096 bytes 00:22:50.089 Persistent Memory Region: Not Supported 00:22:50.089 Optional Asynchronous Events Supported 00:22:50.089 Namespace Attribute Notices: Supported 00:22:50.089 Firmware Activation Notices: Not Supported 00:22:50.089 ANA Change Notices: Not Supported 00:22:50.089 PLE Aggregate Log Change Notices: Not Supported 00:22:50.089 LBA Status Info Alert Notices: Not Supported 00:22:50.089 EGE Aggregate Log Change Notices: Not Supported 00:22:50.089 Normal NVM Subsystem Shutdown event: Not Supported 00:22:50.089 Zone Descriptor Change Notices: Not Supported 00:22:50.089 Discovery Log Change Notices: Not Supported 00:22:50.089 Controller Attributes 00:22:50.089 128-bit Host Identifier: Supported 00:22:50.089 Non-Operational Permissive Mode: Not Supported 00:22:50.089 NVM Sets: Not Supported 00:22:50.089 Read Recovery Levels: Not Supported 00:22:50.089 Endurance Groups: Not Supported 00:22:50.089 Predictable Latency Mode: Not Supported 00:22:50.089 Traffic Based Keep ALive: Not Supported 00:22:50.089 Namespace Granularity: Not Supported 00:22:50.089 SQ Associations: Not Supported 00:22:50.089 UUID List: Not Supported 00:22:50.089 Multi-Domain Subsystem: Not Supported 00:22:50.089 Fixed Capacity Management: Not Supported 00:22:50.089 Variable Capacity Management: Not Supported 00:22:50.089 Delete Endurance Group: Not Supported 00:22:50.089 Delete NVM Set: Not Supported 00:22:50.089 Extended LBA Formats Supported: Not Supported 00:22:50.089 Flexible Data Placement Supported: Not Supported 00:22:50.089 00:22:50.089 Controller Memory Buffer Support 00:22:50.089 ================================ 00:22:50.089 Supported: No 00:22:50.089 00:22:50.089 Persistent Memory Region Support 00:22:50.089 ================================ 00:22:50.089 Supported: No 00:22:50.089 00:22:50.089 Admin Command Set Attributes 00:22:50.089 ============================ 00:22:50.089 Security Send/Receive: Not Supported 00:22:50.089 Format NVM: Not Supported 00:22:50.089 Firmware Activate/Download: Not Supported 00:22:50.089 Namespace Management: Not Supported 00:22:50.089 Device Self-Test: Not Supported 00:22:50.089 Directives: Not Supported 00:22:50.089 NVMe-MI: Not Supported 00:22:50.089 Virtualization Management: Not Supported 00:22:50.089 Doorbell Buffer Config: Not Supported 00:22:50.089 Get LBA Status Capability: Not Supported 00:22:50.089 Command & Feature Lockdown Capability: Not Supported 00:22:50.089 Abort Command Limit: 4 00:22:50.089 Async Event Request Limit: 4 00:22:50.089 Number of Firmware Slots: N/A 00:22:50.089 Firmware Slot 1 Read-Only: N/A 00:22:50.089 Firmware Activation Without Reset: N/A 00:22:50.089 Multiple Update Detection Support: N/A 00:22:50.089 Firmware Update Granularity: No Information Provided 00:22:50.089 Per-Namespace SMART Log: No 00:22:50.089 Asymmetric Namespace Access Log Page: Not Supported 00:22:50.089 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:50.089 Command Effects Log Page: Supported 00:22:50.089 Get Log Page Extended 
Data: Supported 00:22:50.089 Telemetry Log Pages: Not Supported 00:22:50.089 Persistent Event Log Pages: Not Supported 00:22:50.089 Supported Log Pages Log Page: May Support 00:22:50.089 Commands Supported & Effects Log Page: Not Supported 00:22:50.089 Feature Identifiers & Effects Log Page:May Support 00:22:50.089 NVMe-MI Commands & Effects Log Page: May Support 00:22:50.089 Data Area 4 for Telemetry Log: Not Supported 00:22:50.089 Error Log Page Entries Supported: 128 00:22:50.089 Keep Alive: Supported 00:22:50.089 Keep Alive Granularity: 10000 ms 00:22:50.089 00:22:50.089 NVM Command Set Attributes 00:22:50.089 ========================== 00:22:50.089 Submission Queue Entry Size 00:22:50.089 Max: 64 00:22:50.089 Min: 64 00:22:50.089 Completion Queue Entry Size 00:22:50.089 Max: 16 00:22:50.089 Min: 16 00:22:50.089 Number of Namespaces: 32 00:22:50.089 Compare Command: Supported 00:22:50.089 Write Uncorrectable Command: Not Supported 00:22:50.089 Dataset Management Command: Supported 00:22:50.089 Write Zeroes Command: Supported 00:22:50.089 Set Features Save Field: Not Supported 00:22:50.089 Reservations: Supported 00:22:50.089 Timestamp: Not Supported 00:22:50.089 Copy: Supported 00:22:50.089 Volatile Write Cache: Present 00:22:50.089 Atomic Write Unit (Normal): 1 00:22:50.089 Atomic Write Unit (PFail): 1 00:22:50.089 Atomic Compare & Write Unit: 1 00:22:50.089 Fused Compare & Write: Supported 00:22:50.089 Scatter-Gather List 00:22:50.089 SGL Command Set: Supported 00:22:50.089 SGL Keyed: Supported 00:22:50.089 SGL Bit Bucket Descriptor: Not Supported 00:22:50.089 SGL Metadata Pointer: Not Supported 00:22:50.089 Oversized SGL: Not Supported 00:22:50.089 SGL Metadata Address: Not Supported 00:22:50.089 SGL Offset: Supported 00:22:50.089 Transport SGL Data Block: Not Supported 00:22:50.089 Replay Protected Memory Block: Not Supported 00:22:50.089 00:22:50.089 Firmware Slot Information 00:22:50.089 ========================= 00:22:50.089 Active slot: 1 00:22:50.089 Slot 1 Firmware Revision: 25.01 00:22:50.089 00:22:50.089 00:22:50.089 Commands Supported and Effects 00:22:50.089 ============================== 00:22:50.089 Admin Commands 00:22:50.089 -------------- 00:22:50.089 Get Log Page (02h): Supported 00:22:50.089 Identify (06h): Supported 00:22:50.089 Abort (08h): Supported 00:22:50.089 Set Features (09h): Supported 00:22:50.089 Get Features (0Ah): Supported 00:22:50.089 Asynchronous Event Request (0Ch): Supported 00:22:50.089 Keep Alive (18h): Supported 00:22:50.089 I/O Commands 00:22:50.089 ------------ 00:22:50.089 Flush (00h): Supported LBA-Change 00:22:50.089 Write (01h): Supported LBA-Change 00:22:50.089 Read (02h): Supported 00:22:50.089 Compare (05h): Supported 00:22:50.089 Write Zeroes (08h): Supported LBA-Change 00:22:50.089 Dataset Management (09h): Supported LBA-Change 00:22:50.089 Copy (19h): Supported LBA-Change 00:22:50.089 00:22:50.089 Error Log 00:22:50.089 ========= 00:22:50.089 00:22:50.089 Arbitration 00:22:50.089 =========== 00:22:50.089 Arbitration Burst: 1 00:22:50.089 00:22:50.089 Power Management 00:22:50.089 ================ 00:22:50.089 Number of Power States: 1 00:22:50.089 Current Power State: Power State #0 00:22:50.089 Power State #0: 00:22:50.089 Max Power: 0.00 W 00:22:50.089 Non-Operational State: Operational 00:22:50.089 Entry Latency: Not Reported 00:22:50.089 Exit Latency: Not Reported 00:22:50.089 Relative Read Throughput: 0 00:22:50.089 Relative Read Latency: 0 00:22:50.089 Relative Write Throughput: 0 00:22:50.089 Relative Write Latency: 0 
00:22:50.089 Idle Power: Not Reported 00:22:50.089 Active Power: Not Reported 00:22:50.089 Non-Operational Permissive Mode: Not Supported 00:22:50.089 00:22:50.089 Health Information 00:22:50.089 ================== 00:22:50.089 Critical Warnings: 00:22:50.089 Available Spare Space: OK 00:22:50.089 Temperature: OK 00:22:50.089 Device Reliability: OK 00:22:50.089 Read Only: No 00:22:50.089 Volatile Memory Backup: OK 00:22:50.089 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:50.089 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:50.089 Available Spare: 0% 00:22:50.089 Available Spare Threshold: 0% 00:22:50.089 Life Percentage Used:[2024-11-15 10:41:38.424009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.089 [2024-11-15 10:41:38.424021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13ef690) 00:22:50.089 [2024-11-15 10:41:38.424032] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.089 [2024-11-15 10:41:38.424060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451b80, cid 7, qid 0 00:22:50.089 [2024-11-15 10:41:38.424209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.089 [2024-11-15 10:41:38.424222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.089 [2024-11-15 10:41:38.424228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.089 [2024-11-15 10:41:38.424235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451b80) on tqpair=0x13ef690 00:22:50.089 [2024-11-15 10:41:38.424277] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:50.089 [2024-11-15 10:41:38.424296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451100) on tqpair=0x13ef690 00:22:50.089 [2024-11-15 10:41:38.424306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.089 [2024-11-15 10:41:38.424314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451280) on tqpair=0x13ef690 00:22:50.089 [2024-11-15 10:41:38.424321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.090 [2024-11-15 10:41:38.424329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451400) on tqpair=0x13ef690 00:22:50.090 [2024-11-15 10:41:38.424336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.090 [2024-11-15 10:41:38.424343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451580) on tqpair=0x13ef690 00:22:50.090 [2024-11-15 10:41:38.424350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.090 [2024-11-15 10:41:38.424369] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.424394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.424401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ef690) 00:22:50.090 [2024-11-15 10:41:38.424411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:50.090 [2024-11-15 10:41:38.424435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451580, cid 3, qid 0 00:22:50.090 [2024-11-15 10:41:38.424594] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.090 [2024-11-15 10:41:38.424608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.090 [2024-11-15 10:41:38.424614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.424621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451580) on tqpair=0x13ef690 00:22:50.090 [2024-11-15 10:41:38.424631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.424639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.424645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ef690) 00:22:50.090 [2024-11-15 10:41:38.424655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.090 [2024-11-15 10:41:38.424695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451580, cid 3, qid 0 00:22:50.090 [2024-11-15 10:41:38.424831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.090 [2024-11-15 10:41:38.424844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.090 [2024-11-15 10:41:38.424853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.424860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451580) on tqpair=0x13ef690 00:22:50.090 [2024-11-15 10:41:38.424867] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:50.090 [2024-11-15 10:41:38.424874] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:50.090 [2024-11-15 10:41:38.424889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.424897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.424903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ef690) 00:22:50.090 [2024-11-15 10:41:38.424913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.090 [2024-11-15 10:41:38.424933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451580, cid 3, qid 0 00:22:50.090 [2024-11-15 10:41:38.425056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.090 [2024-11-15 10:41:38.425067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.090 [2024-11-15 10:41:38.425074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451580) on tqpair=0x13ef690 00:22:50.090 [2024-11-15 10:41:38.425095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ef690) 00:22:50.090 [2024-11-15 10:41:38.425119] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.090 [2024-11-15 10:41:38.425138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451580, cid 3, qid 0 00:22:50.090 [2024-11-15 10:41:38.425214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.090 [2024-11-15 10:41:38.425227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.090 [2024-11-15 10:41:38.425233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425239] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451580) on tqpair=0x13ef690 00:22:50.090 [2024-11-15 10:41:38.425254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ef690) 00:22:50.090 [2024-11-15 10:41:38.425278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.090 [2024-11-15 10:41:38.425298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451580, cid 3, qid 0 00:22:50.090 [2024-11-15 10:41:38.425395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.090 [2024-11-15 10:41:38.425410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.090 [2024-11-15 10:41:38.425416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451580) on tqpair=0x13ef690 00:22:50.090 [2024-11-15 10:41:38.425439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ef690) 00:22:50.090 [2024-11-15 10:41:38.425464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.090 [2024-11-15 10:41:38.425485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451580, cid 3, qid 0 00:22:50.090 [2024-11-15 10:41:38.425563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.090 [2024-11-15 10:41:38.425576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.090 [2024-11-15 10:41:38.425582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451580) on tqpair=0x13ef690 00:22:50.090 [2024-11-15 10:41:38.425603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ef690) 00:22:50.090 [2024-11-15 10:41:38.425628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.090 [2024-11-15 10:41:38.425649] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451580, cid 3, qid 0 00:22:50.090 [2024-11-15 10:41:38.425739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.090 [2024-11-15 10:41:38.425751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.090 [2024-11-15 10:41:38.425757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451580) on tqpair=0x13ef690 00:22:50.090 [2024-11-15 10:41:38.425778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ef690) 00:22:50.090 [2024-11-15 10:41:38.425802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.090 [2024-11-15 10:41:38.425822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451580, cid 3, qid 0 00:22:50.090 [2024-11-15 10:41:38.425896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.090 [2024-11-15 10:41:38.425908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.090 [2024-11-15 10:41:38.425915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451580) on tqpair=0x13ef690 00:22:50.090 [2024-11-15 10:41:38.425936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.425950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ef690) 00:22:50.090 [2024-11-15 10:41:38.425960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.090 [2024-11-15 10:41:38.425979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451580, cid 3, qid 0 00:22:50.090 [2024-11-15 10:41:38.426050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.090 [2024-11-15 10:41:38.426063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.090 [2024-11-15 10:41:38.426069] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.426075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451580) on tqpair=0x13ef690 00:22:50.090 [2024-11-15 10:41:38.426090] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.426098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.426104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ef690) 00:22:50.090 [2024-11-15 10:41:38.426114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.090 [2024-11-15 10:41:38.426134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451580, cid 3, qid 0 00:22:50.090 [2024-11-15 10:41:38.426208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.090 [2024-11-15 
10:41:38.426226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.090 [2024-11-15 10:41:38.426233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.426239] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451580) on tqpair=0x13ef690 00:22:50.090 [2024-11-15 10:41:38.426255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.426264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.426270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ef690) 00:22:50.090 [2024-11-15 10:41:38.426280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.090 [2024-11-15 10:41:38.426300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451580, cid 3, qid 0 00:22:50.090 [2024-11-15 10:41:38.430373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.090 [2024-11-15 10:41:38.430390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.090 [2024-11-15 10:41:38.430396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.090 [2024-11-15 10:41:38.430403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451580) on tqpair=0x13ef690 00:22:50.090 [2024-11-15 10:41:38.430421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.091 [2024-11-15 10:41:38.430430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.091 [2024-11-15 10:41:38.430437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ef690) 00:22:50.091 [2024-11-15 10:41:38.430447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.091 [2024-11-15 10:41:38.430470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1451580, cid 3, qid 0 00:22:50.091 [2024-11-15 10:41:38.430620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.091 [2024-11-15 10:41:38.430633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.091 [2024-11-15 10:41:38.430639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.091 [2024-11-15 10:41:38.430646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1451580) on tqpair=0x13ef690 00:22:50.091 [2024-11-15 10:41:38.430680] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:22:50.091 0% 00:22:50.091 Data Units Read: 0 00:22:50.091 Data Units Written: 0 00:22:50.091 Host Read Commands: 0 00:22:50.091 Host Write Commands: 0 00:22:50.091 Controller Busy Time: 0 minutes 00:22:50.091 Power Cycles: 0 00:22:50.091 Power On Hours: 0 hours 00:22:50.091 Unsafe Shutdowns: 0 00:22:50.091 Unrecoverable Media Errors: 0 00:22:50.091 Lifetime Error Log Entries: 0 00:22:50.091 Warning Temperature Time: 0 minutes 00:22:50.091 Critical Temperature Time: 0 minutes 00:22:50.091 00:22:50.091 Number of Queues 00:22:50.091 ================ 00:22:50.091 Number of I/O Submission Queues: 127 00:22:50.091 Number of I/O Completion Queues: 127 00:22:50.091 00:22:50.091 Active Namespaces 00:22:50.091 ================= 00:22:50.091 Namespace ID:1 00:22:50.091 Error Recovery Timeout: Unlimited 00:22:50.091 
Command Set Identifier: NVM (00h) 00:22:50.091 Deallocate: Supported 00:22:50.091 Deallocated/Unwritten Error: Not Supported 00:22:50.091 Deallocated Read Value: Unknown 00:22:50.091 Deallocate in Write Zeroes: Not Supported 00:22:50.091 Deallocated Guard Field: 0xFFFF 00:22:50.091 Flush: Supported 00:22:50.091 Reservation: Supported 00:22:50.091 Namespace Sharing Capabilities: Multiple Controllers 00:22:50.091 Size (in LBAs): 131072 (0GiB) 00:22:50.091 Capacity (in LBAs): 131072 (0GiB) 00:22:50.091 Utilization (in LBAs): 131072 (0GiB) 00:22:50.091 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:50.091 EUI64: ABCDEF0123456789 00:22:50.091 UUID: 54b64a18-02ca-441c-a604-b77af4188394 00:22:50.091 Thin Provisioning: Not Supported 00:22:50.091 Per-NS Atomic Units: Yes 00:22:50.091 Atomic Boundary Size (Normal): 0 00:22:50.091 Atomic Boundary Size (PFail): 0 00:22:50.091 Atomic Boundary Offset: 0 00:22:50.091 Maximum Single Source Range Length: 65535 00:22:50.091 Maximum Copy Length: 65535 00:22:50.091 Maximum Source Range Count: 1 00:22:50.091 NGUID/EUI64 Never Reused: No 00:22:50.091 Namespace Write Protected: No 00:22:50.091 Number of LBA Formats: 1 00:22:50.091 Current LBA Format: LBA Format #00 00:22:50.091 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:50.091 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.091 rmmod nvme_tcp 00:22:50.091 rmmod nvme_fabrics 00:22:50.091 rmmod nvme_keyring 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 437532 ']' 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 437532 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 437532 ']' 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 437532 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:22:50.091 10:41:38 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:50.091 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 437532 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 437532' 00:22:50.350 killing process with pid 437532 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 437532 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 437532 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.350 10:41:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.880 10:41:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:52.880 00:22:52.880 real 0m5.552s 00:22:52.881 user 0m4.692s 00:22:52.881 sys 0m1.929s 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.881 ************************************ 00:22:52.881 END TEST nvmf_identify 00:22:52.881 ************************************ 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.881 ************************************ 00:22:52.881 START TEST nvmf_perf 00:22:52.881 ************************************ 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:52.881 * Looking for test storage... 
00:22:52.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.881 10:41:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:52.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.881 --rc genhtml_branch_coverage=1 00:22:52.881 --rc genhtml_function_coverage=1 00:22:52.881 --rc genhtml_legend=1 00:22:52.881 --rc geninfo_all_blocks=1 00:22:52.881 --rc geninfo_unexecuted_blocks=1 00:22:52.881 00:22:52.881 ' 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:52.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.881 --rc genhtml_branch_coverage=1 00:22:52.881 --rc genhtml_function_coverage=1 00:22:52.881 --rc genhtml_legend=1 00:22:52.881 --rc geninfo_all_blocks=1 00:22:52.881 --rc geninfo_unexecuted_blocks=1 00:22:52.881 00:22:52.881 ' 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:52.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.881 --rc genhtml_branch_coverage=1 00:22:52.881 --rc genhtml_function_coverage=1 00:22:52.881 --rc genhtml_legend=1 00:22:52.881 --rc geninfo_all_blocks=1 00:22:52.881 --rc geninfo_unexecuted_blocks=1 00:22:52.881 00:22:52.881 ' 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:52.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.881 --rc genhtml_branch_coverage=1 00:22:52.881 --rc genhtml_function_coverage=1 00:22:52.881 --rc genhtml_legend=1 00:22:52.881 --rc geninfo_all_blocks=1 00:22:52.881 --rc geninfo_unexecuted_blocks=1 00:22:52.881 00:22:52.881 ' 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.881 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:52.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.882 10:41:41 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:52.882 10:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:54.780 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:54.781 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:54.781 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:54.781 Found net devices under 0000:82:00.0: cvl_0_0 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.781 10:41:43 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:54.781 Found net devices under 0000:82:00.1: cvl_0_1 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.781 10:41:43 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:54.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:22:54.781 00:22:54.781 --- 10.0.0.2 ping statistics --- 00:22:54.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.781 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:22:54.781 00:22:54.781 --- 10.0.0.1 ping statistics --- 00:22:54.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.781 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.781 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:54.782 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:54.782 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=439615 00:22:54.782 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:54.782 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 439615 00:22:54.782 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 439615 ']' 00:22:54.782 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.782 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:54.782 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:54.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.782 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:54.782 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:55.039 [2024-11-15 10:41:43.287903] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:22:55.039 [2024-11-15 10:41:43.287990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.039 [2024-11-15 10:41:43.358392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.039 [2024-11-15 10:41:43.412180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.039 [2024-11-15 10:41:43.412247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.039 [2024-11-15 10:41:43.412274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.039 [2024-11-15 10:41:43.412284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.039 [2024-11-15 10:41:43.412294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.039 [2024-11-15 10:41:43.413923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.039 [2024-11-15 10:41:43.414033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.039 [2024-11-15 10:41:43.414129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.039 [2024-11-15 10:41:43.414137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.296 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:55.296 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:22:55.296 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.296 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:55.296 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:55.296 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.296 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:55.296 10:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:58.575 10:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:58.575 10:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:58.575 10:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:81:00.0 00:22:58.575 10:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:59.140 10:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
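In the trace above, perf.sh assembles the bdevs it will export: the locally attached NVMe controller is loaded by piping gen_nvme.sh into the load_subsystem_config RPC, its PCIe address (0000:81:00.0 on this host) is read back with framework_get_config plus a jq filter, and a 64 MB Malloc bdev is added next to it. A condensed sketch of that RPC sequence, with the workspace-absolute paths shortened to the spdk checkout root and assuming rpc.py can reach the target on its default /var/tmp/spdk.sock socket:

  scripts/gen_nvme.sh | scripts/rpc.py load_subsystem_config         # attach the local NVMe controller(s) as bdevs
  local_nvme_trid=$(scripts/rpc.py framework_get_config bdev \
      | jq -r '.[].params | select(.name=="Nvme0").traddr')          # -> 0000:81:00.0 in this run
  scripts/rpc.py bdev_malloc_create 64 512                           # 64 MB RAM bdev, 512 B blocks -> Malloc0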
00:22:59.140 10:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:81:00.0 ']' 00:22:59.140 10:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:59.140 10:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:59.140 10:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:59.140 [2024-11-15 10:41:47.574845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.140 10:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:59.706 10:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:59.706 10:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:59.706 10:41:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:59.706 10:41:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:59.964 10:41:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:00.221 [2024-11-15 10:41:48.662871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.221 10:41:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:00.784 10:41:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:81:00.0 ']' 00:23:00.784 10:41:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:81:00.0' 00:23:00.784 10:41:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:00.784 10:41:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:81:00.0' 00:23:01.714 Initializing NVMe Controllers 00:23:01.714 Attached to NVMe Controller at 0000:81:00.0 [8086:0a54] 00:23:01.714 Associating PCIE (0000:81:00.0) NSID 1 with lcore 0 00:23:01.714 Initialization complete. Launching workers. 
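Before the local PCIe baseline table below, the trace above has already brought the NVMe/TCP target up through the usual RPC sequence: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, add both bdevs as namespaces, and listen on 10.0.0.2:4420 (plus the discovery subsystem on the same port). A condensed sketch of those calls, using a shortened $rpc alias in place of the workspace rpc.py path:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                                # transport options exactly as passed by this run
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420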
00:23:01.714 ======================================================== 00:23:01.714 Latency(us) 00:23:01.714 Device Information : IOPS MiB/s Average min max 00:23:01.714 PCIE (0000:81:00.0) NSID 1 from core 0: 84043.77 328.30 380.30 45.84 8219.42 00:23:01.714 ======================================================== 00:23:01.715 Total : 84043.77 328.30 380.30 45.84 8219.42 00:23:01.715 00:23:01.972 10:41:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:03.342 Initializing NVMe Controllers 00:23:03.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:03.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:03.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:03.342 Initialization complete. Launching workers. 00:23:03.342 ======================================================== 00:23:03.342 Latency(us) 00:23:03.342 Device Information : IOPS MiB/s Average min max 00:23:03.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 91.00 0.36 11236.78 139.34 44918.62 00:23:03.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 70.00 0.27 14835.30 6984.16 51872.94 00:23:03.342 ======================================================== 00:23:03.342 Total : 161.00 0.63 12801.35 139.34 51872.94 00:23:03.342 00:23:03.342 10:41:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:04.729 Initializing NVMe Controllers 00:23:04.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:04.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:04.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:04.729 Initialization complete. Launching workers. 00:23:04.729 ======================================================== 00:23:04.729 Latency(us) 00:23:04.729 Device Information : IOPS MiB/s Average min max 00:23:04.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8571.59 33.48 3732.78 668.26 11085.04 00:23:04.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3805.72 14.87 8419.33 4284.04 16989.25 00:23:04.729 ======================================================== 00:23:04.729 Total : 12377.31 48.35 5173.78 668.26 16989.25 00:23:04.729 00:23:04.729 10:41:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:04.729 10:41:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:04.729 10:41:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:07.254 Initializing NVMe Controllers 00:23:07.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.254 Controller IO queue size 128, less than required. 00:23:07.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:07.254 Controller IO queue size 128, less than required. 00:23:07.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:07.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:07.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:07.254 Initialization complete. Launching workers. 00:23:07.254 ======================================================== 00:23:07.254 Latency(us) 00:23:07.254 Device Information : IOPS MiB/s Average min max 00:23:07.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1334.18 333.55 98702.52 53604.04 139979.14 00:23:07.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 576.86 144.22 230200.66 82709.98 365153.04 00:23:07.254 ======================================================== 00:23:07.254 Total : 1911.04 477.76 138396.18 53604.04 365153.04 00:23:07.254 00:23:07.254 10:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:07.511 No valid NVMe controllers or AIO or URING devices found 00:23:07.511 Initializing NVMe Controllers 00:23:07.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.511 Controller IO queue size 128, less than required. 00:23:07.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:07.511 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:07.511 Controller IO queue size 128, less than required. 00:23:07.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:07.511 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:07.511 WARNING: Some requested NVMe devices were skipped 00:23:07.511 10:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:10.038 Initializing NVMe Controllers 00:23:10.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:10.038 Controller IO queue size 128, less than required. 00:23:10.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.038 Controller IO queue size 128, less than required. 00:23:10.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:10.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:10.039 Initialization complete. Launching workers. 
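The per-queue-pair counters below come from this final pass, which repeats the 256 KiB random read/write mix with --transport-stat so the TCP transport reports poll, socket-completion, and request statistics per lcore. The earlier -o 36964 invocation above, by contrast, produced no latency table at all: spdk_nvme_perf drops any namespace whose sector size does not divide the requested I/O size, and with 512-byte sectors on both namespaces nothing was left to test, hence the "No valid NVMe controllers" message.

  # 36964 is not a multiple of the 512 B sector size, so every namespace is skipped:
  #   36964 / 512 = 72 remainder 100
  # a nearby multiple such as 36864 (72 * 512) would satisfy that particular check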
00:23:10.039 00:23:10.039 ==================== 00:23:10.039 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:10.039 TCP transport: 00:23:10.039 polls: 10509 00:23:10.039 idle_polls: 7969 00:23:10.039 sock_completions: 2540 00:23:10.039 nvme_completions: 4967 00:23:10.039 submitted_requests: 7442 00:23:10.039 queued_requests: 1 00:23:10.039 00:23:10.039 ==================== 00:23:10.039 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:10.039 TCP transport: 00:23:10.039 polls: 7963 00:23:10.039 idle_polls: 5532 00:23:10.039 sock_completions: 2431 00:23:10.039 nvme_completions: 4885 00:23:10.039 submitted_requests: 7314 00:23:10.039 queued_requests: 1 00:23:10.039 ======================================================== 00:23:10.039 Latency(us) 00:23:10.039 Device Information : IOPS MiB/s Average min max 00:23:10.039 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1239.57 309.89 107027.26 76842.11 152941.18 00:23:10.039 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1219.11 304.78 105952.82 57739.85 152611.78 00:23:10.039 ======================================================== 00:23:10.039 Total : 2458.68 614.67 106494.51 57739.85 152941.18 00:23:10.039 00:23:10.039 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:10.039 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:10.297 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:10.297 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:10.297 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:10.297 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:10.297 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:10.297 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:10.297 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:10.297 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:10.297 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:10.297 rmmod nvme_tcp 00:23:10.297 rmmod nvme_fabrics 00:23:10.297 rmmod nvme_keyring 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 439615 ']' 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 439615 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 439615 ']' 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 439615 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 439615 00:23:10.557 10:41:58 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 439615' 00:23:10.557 killing process with pid 439615 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 439615 00:23:10.557 10:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 439615 00:23:13.085 10:42:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:13.085 10:42:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:13.085 10:42:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:13.085 10:42:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:13.085 10:42:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:13.085 10:42:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:13.085 10:42:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:13.085 10:42:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:13.085 10:42:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:13.085 10:42:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.085 10:42:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.085 10:42:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.070 10:42:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:15.070 00:23:15.070 real 0m22.375s 00:23:15.070 user 1m9.713s 00:23:15.070 sys 0m5.997s 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:15.071 ************************************ 00:23:15.071 END TEST nvmf_perf 00:23:15.071 ************************************ 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.071 ************************************ 00:23:15.071 START TEST nvmf_fio_host 00:23:15.071 ************************************ 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:15.071 * Looking for test storage... 
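The nvmf_perf teardown just above follows the harness's nvmftestfini path: unload the host-side NVMe/TCP modules, kill the nvmf_tgt process, strip only the iptables rules tagged with the SPDK_NVMF comment, and dismantle the target network namespace. A rough sketch of the equivalent manual cleanup; the netns delete is an assumption about what the _remove_spdk_ns helper boils down to here, and the pid is the one recorded in this run:

  modprobe -v -r nvme-tcp                                   # also drops nvme_fabrics / nvme_keyring, as seen above
  modprobe -v -r nvme-fabrics
  kill 439615                                               # nvmfpid from when the target was started
  iptables-save | grep -v SPDK_NVMF | iptables-restore      # remove only the SPDK-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                           # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1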
00:23:15.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:15.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.071 --rc genhtml_branch_coverage=1 00:23:15.071 --rc genhtml_function_coverage=1 00:23:15.071 --rc genhtml_legend=1 00:23:15.071 --rc geninfo_all_blocks=1 00:23:15.071 --rc geninfo_unexecuted_blocks=1 00:23:15.071 00:23:15.071 ' 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:15.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.071 --rc genhtml_branch_coverage=1 00:23:15.071 --rc genhtml_function_coverage=1 00:23:15.071 --rc genhtml_legend=1 00:23:15.071 --rc geninfo_all_blocks=1 00:23:15.071 --rc geninfo_unexecuted_blocks=1 00:23:15.071 00:23:15.071 ' 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:15.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.071 --rc genhtml_branch_coverage=1 00:23:15.071 --rc genhtml_function_coverage=1 00:23:15.071 --rc genhtml_legend=1 00:23:15.071 --rc geninfo_all_blocks=1 00:23:15.071 --rc geninfo_unexecuted_blocks=1 00:23:15.071 00:23:15.071 ' 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:15.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.071 --rc genhtml_branch_coverage=1 00:23:15.071 --rc genhtml_function_coverage=1 00:23:15.071 --rc genhtml_legend=1 00:23:15.071 --rc geninfo_all_blocks=1 00:23:15.071 --rc geninfo_unexecuted_blocks=1 00:23:15.071 00:23:15.071 ' 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.071 10:42:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.071 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:15.072 
10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.072 10:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:23:17.636 Found 0000:82:00.0 (0x8086 - 0x159b) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:23:17.636 Found 0000:82:00.1 (0x8086 - 0x159b) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:23:17.636 Found net devices under 0000:82:00.0: cvl_0_0 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:23:17.636 Found net devices under 0000:82:00.1: cvl_0_1 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.636 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:17.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:23:17.637 00:23:17.637 --- 10.0.0.2 ping statistics --- 00:23:17.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.637 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:17.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:23:17.637 00:23:17.637 --- 10.0.0.1 ping statistics --- 00:23:17.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.637 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=443825 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 443825 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 443825 ']' 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:17.637 10:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.637 [2024-11-15 10:42:05.802753] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:23:17.637 [2024-11-15 10:42:05.802850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.637 [2024-11-15 10:42:05.885702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.637 [2024-11-15 10:42:05.943967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.637 [2024-11-15 10:42:05.944022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.637 [2024-11-15 10:42:05.944050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.637 [2024-11-15 10:42:05.944062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.637 [2024-11-15 10:42:05.944071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.637 [2024-11-15 10:42:05.945703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.637 [2024-11-15 10:42:05.945741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.637 [2024-11-15 10:42:05.945831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.637 [2024-11-15 10:42:05.945828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.637 10:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:17.637 10:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:23:17.637 10:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:17.895 [2024-11-15 10:42:06.308927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.895 10:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:17.895 10:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:17.895 10:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.895 10:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:18.461 Malloc1 00:23:18.461 10:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:18.719 10:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:18.977 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.235 [2024-11-15 10:42:07.560960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.235 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:19.494 10:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:19.752 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:19.752 fio-3.35 00:23:19.752 Starting 1 thread 00:23:22.282 00:23:22.282 test: (groupid=0, jobs=1): 
err= 0: pid=444572: Fri Nov 15 10:42:10 2024 00:23:22.282 read: IOPS=8886, BW=34.7MiB/s (36.4MB/s)(69.7MiB/2007msec) 00:23:22.282 slat (usec): min=2, max=185, avg= 2.91, stdev= 2.39 00:23:22.282 clat (usec): min=2444, max=13817, avg=7877.18, stdev=639.65 00:23:22.282 lat (usec): min=2465, max=13820, avg=7880.09, stdev=639.53 00:23:22.282 clat percentiles (usec): 00:23:22.282 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:23:22.282 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8029], 00:23:22.282 | 70.00th=[ 8225], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:23:22.282 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[12518], 99.95th=[13435], 00:23:22.282 | 99.99th=[13829] 00:23:22.282 bw ( KiB/s): min=34680, max=36560, per=99.94%, avg=35524.00, stdev=847.91, samples=4 00:23:22.282 iops : min= 8670, max= 9140, avg=8881.00, stdev=211.98, samples=4 00:23:22.282 write: IOPS=8900, BW=34.8MiB/s (36.5MB/s)(69.8MiB/2007msec); 0 zone resets 00:23:22.282 slat (usec): min=2, max=141, avg= 3.04, stdev= 2.03 00:23:22.282 clat (usec): min=1363, max=12647, avg=6418.92, stdev=530.85 00:23:22.282 lat (usec): min=1370, max=12650, avg=6421.96, stdev=530.81 00:23:22.282 clat percentiles (usec): 00:23:22.282 | 1.00th=[ 5276], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 6063], 00:23:22.282 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6521], 00:23:22.282 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 7046], 95.00th=[ 7242], 00:23:22.282 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[10421], 99.95th=[11207], 00:23:22.282 | 99.99th=[12649] 00:23:22.282 bw ( KiB/s): min=35168, max=36024, per=100.00%, avg=35620.00, stdev=396.92, samples=4 00:23:22.282 iops : min= 8792, max= 9006, avg=8905.00, stdev=99.23, samples=4 00:23:22.282 lat (msec) : 2=0.03%, 4=0.10%, 10=99.68%, 20=0.19% 00:23:22.282 cpu : usr=68.94%, sys=28.71%, ctx=106, majf=0, minf=28 00:23:22.282 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:22.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:22.282 issued rwts: total=17835,17864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:22.282 00:23:22.282 Run status group 0 (all jobs): 00:23:22.282 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.7MiB (73.1MB), run=2007-2007msec 00:23:22.283 WRITE: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.8MiB (73.2MB), run=2007-2007msec 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local 
sanitizers 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:22.283 10:42:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:22.540 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:22.540 fio-3.35 00:23:22.540 Starting 1 thread 00:23:25.066 00:23:25.066 test: (groupid=0, jobs=1): err= 0: pid=445151: Fri Nov 15 10:42:13 2024 00:23:25.066 read: IOPS=8277, BW=129MiB/s (136MB/s)(259MiB/2005msec) 00:23:25.066 slat (usec): min=2, max=123, avg= 4.09, stdev= 2.41 00:23:25.066 clat (usec): min=2523, max=17137, avg=8927.83, stdev=2082.27 00:23:25.066 lat (usec): min=2526, max=17141, avg=8931.92, stdev=2082.27 00:23:25.066 clat percentiles (usec): 00:23:25.066 | 1.00th=[ 4686], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 7046], 00:23:25.066 | 30.00th=[ 7701], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9503], 00:23:25.066 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11600], 95.00th=[12387], 00:23:25.066 | 99.00th=[14222], 99.50th=[15008], 99.90th=[16712], 99.95th=[16909], 00:23:25.066 | 99.99th=[17171] 00:23:25.066 bw ( KiB/s): min=63136, max=73984, per=51.73%, avg=68512.00, stdev=5957.56, samples=4 00:23:25.066 iops : min= 3946, max= 4624, avg=4282.00, stdev=372.35, samples=4 00:23:25.066 write: IOPS=4804, BW=75.1MiB/s (78.7MB/s)(140MiB/1868msec); 0 zone resets 00:23:25.066 slat (usec): 
min=30, max=163, avg=37.72, stdev= 6.43 00:23:25.066 clat (usec): min=5765, max=18721, avg=11436.58, stdev=1934.22 00:23:25.066 lat (usec): min=5801, max=18770, avg=11474.30, stdev=1934.03 00:23:25.066 clat percentiles (usec): 00:23:25.066 | 1.00th=[ 7635], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:23:25.066 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11207], 60.00th=[11600], 00:23:25.066 | 70.00th=[12256], 80.00th=[13042], 90.00th=[14091], 95.00th=[15139], 00:23:25.066 | 99.00th=[16712], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:23:25.066 | 99.99th=[18744] 00:23:25.066 bw ( KiB/s): min=64864, max=77344, per=92.84%, avg=71360.00, stdev=6547.73, samples=4 00:23:25.066 iops : min= 4054, max= 4834, avg=4460.00, stdev=409.23, samples=4 00:23:25.066 lat (msec) : 4=0.22%, 10=53.35%, 20=46.43% 00:23:25.066 cpu : usr=83.24%, sys=15.26%, ctx=115, majf=0, minf=50 00:23:25.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:25.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:25.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:25.066 issued rwts: total=16597,8974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:25.066 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:25.066 00:23:25.066 Run status group 0 (all jobs): 00:23:25.066 READ: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=259MiB (272MB), run=2005-2005msec 00:23:25.066 WRITE: bw=75.1MiB/s (78.7MB/s), 75.1MiB/s-75.1MiB/s (78.7MB/s-78.7MB/s), io=140MiB (147MB), run=1868-1868msec 00:23:25.066 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:25.323 rmmod nvme_tcp 00:23:25.323 rmmod nvme_fabrics 00:23:25.323 rmmod nvme_keyring 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 443825 ']' 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 443825 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 443825 ']' 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # 
kill -0 443825 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:25.323 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 443825 00:23:25.324 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:25.324 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:25.324 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 443825' 00:23:25.324 killing process with pid 443825 00:23:25.324 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 443825 00:23:25.324 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 443825 00:23:25.583 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:25.583 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:25.583 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:25.583 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:25.583 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:25.583 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:25.583 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:25.583 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:25.583 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:25.584 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.584 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.584 10:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.511 10:42:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:27.511 00:23:27.511 real 0m12.654s 00:23:27.511 user 0m38.203s 00:23:27.511 sys 0m3.956s 00:23:27.511 10:42:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:27.511 10:42:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.511 ************************************ 00:23:27.511 END TEST nvmf_fio_host 00:23:27.511 ************************************ 00:23:27.511 10:42:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:27.511 10:42:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:27.511 10:42:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:27.511 10:42:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.770 ************************************ 00:23:27.770 START TEST nvmf_failover 00:23:27.770 ************************************ 00:23:27.771 10:42:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:27.771 * Looking for test storage... 00:23:27.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:27.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.771 --rc genhtml_branch_coverage=1 00:23:27.771 --rc genhtml_function_coverage=1 00:23:27.771 --rc genhtml_legend=1 00:23:27.771 --rc geninfo_all_blocks=1 00:23:27.771 --rc geninfo_unexecuted_blocks=1 00:23:27.771 00:23:27.771 ' 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:27.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.771 --rc genhtml_branch_coverage=1 00:23:27.771 --rc genhtml_function_coverage=1 00:23:27.771 --rc genhtml_legend=1 00:23:27.771 --rc geninfo_all_blocks=1 00:23:27.771 --rc geninfo_unexecuted_blocks=1 00:23:27.771 00:23:27.771 ' 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:27.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.771 --rc genhtml_branch_coverage=1 00:23:27.771 --rc genhtml_function_coverage=1 00:23:27.771 --rc genhtml_legend=1 00:23:27.771 --rc geninfo_all_blocks=1 00:23:27.771 --rc geninfo_unexecuted_blocks=1 00:23:27.771 00:23:27.771 ' 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:27.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.771 --rc genhtml_branch_coverage=1 00:23:27.771 --rc genhtml_function_coverage=1 00:23:27.771 --rc genhtml_legend=1 00:23:27.771 --rc geninfo_all_blocks=1 00:23:27.771 --rc geninfo_unexecuted_blocks=1 00:23:27.771 00:23:27.771 ' 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:27.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:27.771 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:27.772 10:42:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:23:29.675 Found 0000:82:00.0 (0x8086 - 0x159b) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:23:29.675 Found 0000:82:00.1 (0x8086 - 0x159b) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:23:29.675 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:23:29.934 Found net devices under 0000:82:00.0: cvl_0_0 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:23:29.934 Found net devices under 0000:82:00.1: cvl_0_1 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:29.934 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:29.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:23:29.935 00:23:29.935 --- 10.0.0.2 ping statistics --- 00:23:29.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.935 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:29.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:23:29.935 00:23:29.935 --- 10.0.0.1 ping statistics --- 00:23:29.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.935 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=447352 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 447352 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 447352 ']' 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:29.935 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:29.935 [2024-11-15 10:42:18.342558] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:23:29.935 [2024-11-15 10:42:18.342632] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.193 [2024-11-15 10:42:18.414011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:30.194 [2024-11-15 10:42:18.470746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:30.194 [2024-11-15 10:42:18.470797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.194 [2024-11-15 10:42:18.470821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.194 [2024-11-15 10:42:18.470833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.194 [2024-11-15 10:42:18.470843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.194 [2024-11-15 10:42:18.472402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.194 [2024-11-15 10:42:18.472469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.194 [2024-11-15 10:42:18.472473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.194 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:30.194 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:23:30.194 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:30.194 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:30.194 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:30.194 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.194 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:30.761 [2024-11-15 10:42:18.920690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.761 10:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:31.018 Malloc0 00:23:31.018 10:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:31.276 10:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:31.535 10:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:31.793 [2024-11-15 10:42:20.126328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.793 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:32.052 [2024-11-15 10:42:20.395123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:32.052 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:32.310 [2024-11-15 10:42:20.655898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:23:32.310 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=447639 00:23:32.310 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:32.310 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:32.310 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 447639 /var/tmp/bdevperf.sock 00:23:32.310 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 447639 ']' 00:23:32.310 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.310 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:32.310 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.310 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:32.310 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.568 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:32.568 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:23:32.568 10:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:33.134 NVMe0n1 00:23:33.134 10:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:33.701 00:23:33.701 10:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=447776 00:23:33.701 10:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:33.701 10:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:34.636 10:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.895 [2024-11-15 10:42:23.225034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee5380 is same with the state(6) to be set 00:23:34.895 [2024-11-15 10:42:23.225109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee5380 is same with the state(6) to be set 00:23:34.895 [2024-11-15 10:42:23.225125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee5380 is same with the state(6) to be set 00:23:34.895 [2024-11-15 
00:23:34.896 [2024-11-15 10:42:23.226041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee5380 is same with the state(6) to be set
00:23:34.896 10:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:23:38.180 10:42:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:38.438
00:23:38.438 10:42:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:38.696 10:42:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:41.981 10:42:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:41.981 [2024-11-15 10:42:30.411568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:41.981 10:42:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:43.357 10:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:43.357 [2024-11-15 10:42:31.720075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab220 is same with the state(6) to be set
00:23:43.357 [2024-11-15 10:42:31.720243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab220 is same with the state(6) to be set
00:23:43.357 10:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 447776
00:23:48.626 {
00:23:48.626 "results": [
00:23:48.626 {
00:23:48.626 "job": "NVMe0n1",
00:23:48.626 "core_mask": "0x1",
00:23:48.626 "workload": "verify",
00:23:48.626 "status": "finished",
00:23:48.626 "verify_range": {
00:23:48.626 "start": 0,
00:23:48.626 "length": 16384
00:23:48.626 },
00:23:48.626 "queue_depth": 128,
00:23:48.626 "io_size": 4096,
00:23:48.626 "runtime": 15.006209,
00:23:48.626 "iops": 8714.326183248548,
00:23:48.626 "mibps": 34.04033665331464,
00:23:48.626 "io_failed": 9133,
00:23:48.626 "io_timeout": 0,
00:23:48.626 "avg_latency_us": 13702.090054869097,
00:23:48.626 "min_latency_us": 537.0311111111112,
00:23:48.626 "max_latency_us": 16019.91111111111
00:23:48.626 }
00:23:48.626 ],
00:23:48.626 "core_count": 1
00:23:48.626 }
00:23:48.626 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 447639
00:23:48.626 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 447639 ']'
00:23:48.626 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 447639
00:23:48.626 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:23:48.626 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:23:48.626 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 447639
00:23:48.892 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:23:48.892 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:23:48.892 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 447639'
00:23:48.892 killing process with pid 447639
00:23:48.892 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 447639
00:23:48.892 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 447639
00:23:48.892 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:48.892 [2024-11-15 10:42:20.721262] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization...
00:23:48.892 [2024-11-15 10:42:20.721378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447639 ] 00:23:48.892 [2024-11-15 10:42:20.788758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.892 [2024-11-15 10:42:20.847421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.892 Running I/O for 15 seconds... 00:23:48.892 8640.00 IOPS, 33.75 MiB/s [2024-11-15T09:42:37.355Z] [2024-11-15 10:42:23.227429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.892 [2024-11-15 10:42:23.227476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.892 [2024-11-15 10:42:23.227503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.892 [2024-11-15 10:42:23.227520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83272 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.227980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.227993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:48.893 [2024-11-15 10:42:23.228021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 
10:42:23.228316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.893 [2024-11-15 10:42:23.228699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.893 [2024-11-15 10:42:23.228727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.893 [2024-11-15 10:42:23.228742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.228755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.228769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.228782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.228797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.228810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.228824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.228837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.228851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.228864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.228879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.228893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.228907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.228920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.228934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.228955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.228971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.228984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.228999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229571] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229870] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.894 [2024-11-15 10:42:23.229911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.894 [2024-11-15 10:42:23.229925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.229938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.229953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.229966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.229980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.229993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.895 [2024-11-15 10:42:23.230111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83920 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 
[2024-11-15 10:42:23.230483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.230982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.230996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.231011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.231025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.231040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.231053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.231068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.895 [2024-11-15 10:42:23.231081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.895 [2024-11-15 10:42:23.231112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.895 [2024-11-15 10:42:23.231129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84168 len:8 PRP1 0x0 PRP2 0x0 00:23:48.896 [2024-11-15 10:42:23.231145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:23.231164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.896 [2024-11-15 10:42:23.231177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.896 [2024-11-15 10:42:23.231189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84176 len:8 PRP1 0x0 PRP2 0x0 00:23:48.896 [2024-11-15 10:42:23.231202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:23.231215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.896 [2024-11-15 10:42:23.231226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.896 [2024-11-15 10:42:23.231237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84184 len:8 PRP1 0x0 PRP2 0x0 00:23:48.896 [2024-11-15 10:42:23.231250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:23.231267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.896 [2024-11-15 10:42:23.231279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.896 [2024-11-15 10:42:23.231290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84192 len:8 PRP1 0x0 PRP2 0x0 00:23:48.896 [2024-11-15 10:42:23.231303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:23.231315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.896 [2024-11-15 10:42:23.231326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.896 [2024-11-15 10:42:23.231337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84200 len:8 PRP1 0x0 PRP2 0x0 00:23:48.896 [2024-11-15 10:42:23.231351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:23.231387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.896 [2024-11-15 10:42:23.231402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.896 [2024-11-15 10:42:23.231414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84208 len:8 PRP1 0x0 PRP2 0x0 00:23:48.896 [2024-11-15 10:42:23.231427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:23.231441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o
00:23:48.896 [2024-11-15 10:42:23.231452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:48.896 [2024-11-15 10:42:23.231463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84216 len:8 PRP1 0x0 PRP2 0x0
00:23:48.896 [2024-11-15 10:42:23.231476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.896 [2024-11-15 10:42:23.231490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:48.896 [2024-11-15 10:42:23.231501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:48.896 [2024-11-15 10:42:23.231513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84224 len:8 PRP1 0x0 PRP2 0x0
00:23:48.896 [2024-11-15 10:42:23.231525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.896 [2024-11-15 10:42:23.231597] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:48.896 [2024-11-15 10:42:23.231639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.896 [2024-11-15 10:42:23.231666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.896 [2024-11-15 10:42:23.231698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.896 [2024-11-15 10:42:23.231723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.896 [2024-11-15 10:42:23.231736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.896 [2024-11-15 10:42:23.231749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.896 [2024-11-15 10:42:23.231763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.896 [2024-11-15 10:42:23.231776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.896 [2024-11-15 10:42:23.231800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:48.896 [2024-11-15 10:42:23.231847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd7560 (9): Bad file descriptor
00:23:48.896 [2024-11-15 10:42:23.235100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:48.896 [2024-11-15 10:42:23.349801] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
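The records above show the initiator side of one failover cycle: queued WRITEs on qpair 1 are completed manually as ABORTED - SQ DELETION when the 10.0.0.2:4420 path drops, bdev_nvme fails the trid over to 10.0.0.2:4421, the controller is marked failed and disconnected, and the reset then completes successfully. For reference only (these are not the commands captured in this log), a minimal sketch of how a multi-listener TCP target and a failover-capable initiator of this shape are typically wired up with SPDK's scripts/rpc.py; the bdev names, serial number, and the -x failover multipath mode are assumptions, and exact flags can differ between SPDK versions:
# Target side: one subsystem exposed on several TCP listeners (sketch, not from this log)
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port" -f ipv4
done
# Initiator side: attach the same controller through each path so bdev_nvme can fail over
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
If the controller is attached with -x failover, I/O stays on a single active path and the alternate trids are only tried after that path fails, which is consistent with the Start failover notices in this log.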
00:23:48.896 8183.50 IOPS, 31.97 MiB/s [2024-11-15T09:42:37.359Z] 8384.67 IOPS, 32.75 MiB/s [2024-11-15T09:42:37.359Z] 8544.75 IOPS, 33.38 MiB/s [2024-11-15T09:42:37.359Z] 8612.20 IOPS, 33.64 MiB/s [2024-11-15T09:42:37.359Z] [2024-11-15 10:42:27.080228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.896 [2024-11-15 10:42:27.080284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.896 [2024-11-15 10:42:27.080325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.896 [2024-11-15 10:42:27.080378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:124272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.896 [2024-11-15 10:42:27.080409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.896 [2024-11-15 10:42:27.080438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.896 [2024-11-15 10:42:27.080468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.896 [2024-11-15 10:42:27.080498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.896 [2024-11-15 10:42:27.080528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.896 [2024-11-15 10:42:27.080559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:48.896 [2024-11-15 10:42:27.080587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.896 [2024-11-15 10:42:27.080624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.896 [2024-11-15 10:42:27.080677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.896 [2024-11-15 10:42:27.080704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.896 [2024-11-15 10:42:27.080732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.896 [2024-11-15 10:42:27.080758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:124760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.896 [2024-11-15 10:42:27.080784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.896 [2024-11-15 10:42:27.080812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:124776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.896 [2024-11-15 10:42:27.080839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.896 [2024-11-15 10:42:27.080868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.896 [2024-11-15 
10:42:27.080895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.896 [2024-11-15 10:42:27.080909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.080922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.080937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.080950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.080964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.080976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.080995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.897 [2024-11-15 10:42:27.081739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.897 [2024-11-15 10:42:27.081908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.897 [2024-11-15 10:42:27.081922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.081935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.081964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.081978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.081992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:48.898 [2024-11-15 10:42:27.082103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 
10:42:27.082434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.898 [2024-11-15 10:42:27.082744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.082773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.082801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.082831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.082860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.082888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.082918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.082951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.082980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.082995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.083009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.083024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.083037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.083052] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.083065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.083080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.083094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.083110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.083124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.898 [2024-11-15 10:42:27.083139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.898 [2024-11-15 10:42:27.083153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 
nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124560 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.899 [2024-11-15 10:42:27.083745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.083967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.083981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:48.899 [2024-11-15 10:42:27.083994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.084008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.084022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.084037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.084050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.084064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.084081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.084097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.084111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.084125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.084138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.084153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.899 [2024-11-15 10:42:27.084166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.084196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.899 [2024-11-15 10:42:27.084211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.899 [2024-11-15 10:42:27.084224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124688 len:8 PRP1 0x0 PRP2 0x0 00:23:48.899 [2024-11-15 10:42:27.084236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.084301] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:48.899 [2024-11-15 10:42:27.084338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.899 [2024-11-15 10:42:27.084387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.084405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:48.899 [2024-11-15 10:42:27.084421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.084435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.899 [2024-11-15 10:42:27.084449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.084462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.899 [2024-11-15 10:42:27.084477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.899 [2024-11-15 10:42:27.084491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:48.900 [2024-11-15 10:42:27.087760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:48.900 [2024-11-15 10:42:27.087800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd7560 (9): Bad file descriptor 00:23:48.900 [2024-11-15 10:42:27.158026] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:23:48.900 8537.33 IOPS, 33.35 MiB/s [2024-11-15T09:42:37.363Z] 8594.43 IOPS, 33.57 MiB/s [2024-11-15T09:42:37.363Z] 8619.62 IOPS, 33.67 MiB/s [2024-11-15T09:42:37.363Z] 8622.33 IOPS, 33.68 MiB/s [2024-11-15T09:42:37.363Z] [2024-11-15 10:42:31.722018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.900 [2024-11-15 10:42:31.722063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.900 [2024-11-15 10:42:31.722114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.900 [2024-11-15 10:42:31.722146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.900 [2024-11-15 10:42:31.722181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.900 [2024-11-15 10:42:31.722208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.900 [2024-11-15 10:42:31.722239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.900 [2024-11-15 10:42:31.722268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.900 [2024-11-15 10:42:31.722297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.900 [2024-11-15 10:42:31.722327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.900 [2024-11-15 10:42:31.722382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76976 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 
[2024-11-15 10:42:31.722854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.722980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.722992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.723007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.723019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.723033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.723046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.723060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.723073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.723087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.723100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.723113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.723126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.723140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.723152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.900 [2024-11-15 10:42:31.723166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.900 [2024-11-15 10:42:31.723179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.723970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.723983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.724010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.724024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.724039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.724053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.724068] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.724080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.724095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.724108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.724123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.724136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.724150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.724163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.724178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.724191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.724205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.724218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.724232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.724246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.901 [2024-11-15 10:42:31.724260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.901 [2024-11-15 10:42:31.724273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724345] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77544 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.902 [2024-11-15 10:42:31.724819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.724873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77576 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.724887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.902 [2024-11-15 10:42:31.724919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.724931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77584 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.724944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.724957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.902 [2024-11-15 10:42:31.724969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.724981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77592 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.724995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.725008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.902 [2024-11-15 10:42:31.725019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.725031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77600 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.725050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.725064] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.902 [2024-11-15 10:42:31.725075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.725086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77608 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.725099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.725112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.902 [2024-11-15 10:42:31.725124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.725135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77616 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.725163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.725177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.902 [2024-11-15 10:42:31.725187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.725198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77624 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.725211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.725224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.902 [2024-11-15 10:42:31.725234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.725249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77632 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.725262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.725275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.902 [2024-11-15 10:42:31.725286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.725298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77640 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.725311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.725324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.902 [2024-11-15 10:42:31.725335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.725346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77648 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.725358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.725396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:23:48.902 [2024-11-15 10:42:31.725408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.725420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77656 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.725433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.725445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.902 [2024-11-15 10:42:31.725456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.725468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77664 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.725485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.725499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.902 [2024-11-15 10:42:31.725510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.725521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77672 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.725535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.725548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.902 [2024-11-15 10:42:31.725559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.902 [2024-11-15 10:42:31.725570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77680 len:8 PRP1 0x0 PRP2 0x0 00:23:48.902 [2024-11-15 10:42:31.725582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.902 [2024-11-15 10:42:31.725595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.725606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.725617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77688 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.725631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.725644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.725658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.725685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77696 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.725698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.725711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.725722] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.725733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77704 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.725746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.725759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.725770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.725781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77712 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.725793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.725807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.725817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.725828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77720 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.725840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.725853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.725864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.725875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77728 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.725893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.725906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.725917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.725927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77736 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.725940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.725954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.725964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.725975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77744 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.725988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77752 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77760 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77768 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77776 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77784 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77792 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 
[2024-11-15 10:42:31.726316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77800 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77808 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77816 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77824 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77832 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77840 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77848 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77856 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77864 len:8 PRP1 0x0 PRP2 0x0 00:23:48.903 [2024-11-15 10:42:31.726761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.903 [2024-11-15 10:42:31.726774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.903 [2024-11-15 10:42:31.726784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.903 [2024-11-15 10:42:31.726800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77872 len:8 PRP1 0x0 PRP2 0x0 00:23:48.904 [2024-11-15 10:42:31.726813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.904 [2024-11-15 10:42:31.726885] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:48.904 [2024-11-15 10:42:31.726929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.904 [2024-11-15 10:42:31.726948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.904 [2024-11-15 10:42:31.726963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.904 [2024-11-15 10:42:31.726976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.904 [2024-11-15 10:42:31.726989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.904 [2024-11-15 10:42:31.727002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.904 [2024-11-15 10:42:31.727015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.904 [2024-11-15 10:42:31.727028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.904 [2024-11-15 10:42:31.727041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:48.904 [2024-11-15 10:42:31.730300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:48.904 [2024-11-15 10:42:31.730342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd7560 (9): Bad file descriptor 00:23:48.904 [2024-11-15 10:42:31.756900] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:23:48.904 8626.00 IOPS, 33.70 MiB/s [2024-11-15T09:42:37.367Z] 8659.55 IOPS, 33.83 MiB/s [2024-11-15T09:42:37.367Z] 8677.58 IOPS, 33.90 MiB/s [2024-11-15T09:42:37.367Z] 8695.00 IOPS, 33.96 MiB/s [2024-11-15T09:42:37.367Z] 8702.21 IOPS, 33.99 MiB/s 00:23:48.904 Latency(us) 00:23:48.904 [2024-11-15T09:42:37.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.904 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:48.904 Verification LBA range: start 0x0 length 0x4000 00:23:48.904 NVMe0n1 : 15.01 8714.33 34.04 608.61 0.00 13702.09 537.03 16019.91 00:23:48.904 [2024-11-15T09:42:37.367Z] =================================================================================================================== 00:23:48.904 [2024-11-15T09:42:37.367Z] Total : 8714.33 34.04 608.61 0.00 13702.09 537.03 16019.91 00:23:48.904 Received shutdown signal, test time was about 15.000000 seconds 00:23:48.904 00:23:48.904 Latency(us) 00:23:48.904 [2024-11-15T09:42:37.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.904 [2024-11-15T09:42:37.367Z] =================================================================================================================== 00:23:48.904 [2024-11-15T09:42:37.367Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.904 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:49.161 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:49.161 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:49.161 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=449619 00:23:49.161 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:49.161 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 449619 /var/tmp/bdevperf.sock 00:23:49.161 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 449619 ']' 00:23:49.161 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.161 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:49.161 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
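At this point in the trace the first bdevperf pass has finished, the test has counted the 'Resetting controller successful' messages (host/failover.sh@65 reports count=3 and @67 checks it), and a second bdevperf instance is being started on /var/tmp/bdevperf.sock. A minimal bash sketch of that counting check follows; it is an illustration, not the actual failover.sh, and the assumption that the grep reads the captured try.txt output is mine.

#!/usr/bin/env bash
# Illustrative sketch of the reset-count check seen in the trace above
# (not the real SPDK failover.sh). Whether the grep input is try.txt is an
# assumption; the expected count of 3 matches the trace.
log_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
expected_resets=3

count=$(grep -c 'Resetting controller successful' "$log_file")
if (( count != expected_resets )); then
    echo "failover check failed: expected ${expected_resets} resets, saw ${count}" >&2
    exit 1
fi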
00:23:49.161 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:49.161 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:49.418 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:49.418 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:23:49.418 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:49.675 [2024-11-15 10:42:37.887829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:49.675 10:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:49.932 [2024-11-15 10:42:38.152556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:49.932 10:42:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:50.189 NVMe0n1 00:23:50.189 10:42:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:50.754 00:23:50.754 10:42:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:51.011 00:23:51.011 10:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:51.011 10:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:51.270 10:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:51.527 10:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:54.810 10:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:54.810 10:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:54.810 10:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=450288 00:23:54.810 10:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:54.810 10:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 450288 00:23:56.186 { 00:23:56.186 "results": [ 00:23:56.186 { 00:23:56.186 "job": "NVMe0n1", 00:23:56.186 "core_mask": "0x1", 00:23:56.186 
"workload": "verify", 00:23:56.186 "status": "finished", 00:23:56.186 "verify_range": { 00:23:56.186 "start": 0, 00:23:56.186 "length": 16384 00:23:56.186 }, 00:23:56.186 "queue_depth": 128, 00:23:56.186 "io_size": 4096, 00:23:56.186 "runtime": 1.014708, 00:23:56.186 "iops": 8663.57612239186, 00:23:56.186 "mibps": 33.8420942280932, 00:23:56.186 "io_failed": 0, 00:23:56.186 "io_timeout": 0, 00:23:56.186 "avg_latency_us": 14713.238153835784, 00:23:56.186 "min_latency_us": 2924.8474074074074, 00:23:56.186 "max_latency_us": 18738.44148148148 00:23:56.186 } 00:23:56.186 ], 00:23:56.186 "core_count": 1 00:23:56.186 } 00:23:56.186 10:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:56.186 [2024-11-15 10:42:37.400656] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:23:56.186 [2024-11-15 10:42:37.400767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449619 ] 00:23:56.186 [2024-11-15 10:42:37.469032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.186 [2024-11-15 10:42:37.525494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.186 [2024-11-15 10:42:39.825585] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:56.186 [2024-11-15 10:42:39.825685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.186 [2024-11-15 10:42:39.825708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.186 [2024-11-15 10:42:39.825725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.186 [2024-11-15 10:42:39.825738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.186 [2024-11-15 10:42:39.825752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.186 [2024-11-15 10:42:39.825764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.186 [2024-11-15 10:42:39.825787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.186 [2024-11-15 10:42:39.825800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.186 [2024-11-15 10:42:39.825813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:23:56.186 [2024-11-15 10:42:39.825867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:56.186 [2024-11-15 10:42:39.825897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2060560 (9): Bad file descriptor 00:23:56.186 [2024-11-15 10:42:39.871603] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:56.186 Running I/O for 1 seconds... 00:23:56.186 8624.00 IOPS, 33.69 MiB/s 00:23:56.186 Latency(us) 00:23:56.186 [2024-11-15T09:42:44.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.186 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:56.186 Verification LBA range: start 0x0 length 0x4000 00:23:56.186 NVMe0n1 : 1.01 8663.58 33.84 0.00 0.00 14713.24 2924.85 18738.44 00:23:56.186 [2024-11-15T09:42:44.649Z] =================================================================================================================== 00:23:56.186 [2024-11-15T09:42:44.649Z] Total : 8663.58 33.84 0.00 0.00 14713.24 2924.85 18738.44 00:23:56.186 10:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:56.186 10:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:56.186 10:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:56.765 10:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:56.765 10:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:56.765 10:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:57.330 10:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:00.611 10:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:00.611 10:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:00.611 10:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 449619 00:24:00.611 10:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 449619 ']' 00:24:00.611 10:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 449619 00:24:00.611 10:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:24:00.611 10:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:00.611 10:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 449619 00:24:00.611 10:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:00.611 10:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:00.611 10:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 449619' 00:24:00.611 killing process with pid 449619 00:24:00.611 10:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 449619 00:24:00.611 10:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 449619 00:24:00.611 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:00.611 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:00.869 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:00.869 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:00.869 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:00.869 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:00.869 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:00.869 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:00.869 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:00.869 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:00.869 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:00.869 rmmod nvme_tcp 00:24:00.869 rmmod nvme_fabrics 00:24:01.127 rmmod nvme_keyring 00:24:01.127 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.127 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:01.128 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:01.128 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 447352 ']' 00:24:01.128 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 447352 00:24:01.128 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 447352 ']' 00:24:01.128 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 447352 00:24:01.128 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:24:01.128 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:01.128 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 447352 00:24:01.128 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:01.128 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:01.128 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 447352' 00:24:01.128 killing process with pid 447352 00:24:01.128 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 447352 00:24:01.128 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 447352 00:24:01.386 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:24:01.386 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:01.386 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:01.386 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:01.386 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:01.386 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:01.386 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:01.386 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:01.386 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:01.386 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.386 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.386 10:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.296 10:42:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:03.296 00:24:03.296 real 0m35.692s 00:24:03.296 user 2m6.555s 00:24:03.296 sys 0m6.190s 00:24:03.296 10:42:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:03.296 10:42:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:03.296 ************************************ 00:24:03.296 END TEST nvmf_failover 00:24:03.296 ************************************ 00:24:03.296 10:42:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:03.296 10:42:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:03.296 10:42:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:03.296 10:42:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.296 ************************************ 00:24:03.296 START TEST nvmf_host_discovery 00:24:03.296 ************************************ 00:24:03.296 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:03.556 * Looking for test storage... 
00:24:03.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:03.556 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:03.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.557 --rc genhtml_branch_coverage=1 00:24:03.557 --rc genhtml_function_coverage=1 00:24:03.557 --rc genhtml_legend=1 00:24:03.557 --rc geninfo_all_blocks=1 00:24:03.557 --rc geninfo_unexecuted_blocks=1 00:24:03.557 00:24:03.557 ' 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:03.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.557 --rc genhtml_branch_coverage=1 00:24:03.557 --rc genhtml_function_coverage=1 00:24:03.557 --rc genhtml_legend=1 00:24:03.557 --rc geninfo_all_blocks=1 00:24:03.557 --rc geninfo_unexecuted_blocks=1 00:24:03.557 00:24:03.557 ' 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:03.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.557 --rc genhtml_branch_coverage=1 00:24:03.557 --rc genhtml_function_coverage=1 00:24:03.557 --rc genhtml_legend=1 00:24:03.557 --rc geninfo_all_blocks=1 00:24:03.557 --rc geninfo_unexecuted_blocks=1 00:24:03.557 00:24:03.557 ' 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:03.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.557 --rc genhtml_branch_coverage=1 00:24:03.557 --rc genhtml_function_coverage=1 00:24:03.557 --rc genhtml_legend=1 00:24:03.557 --rc geninfo_all_blocks=1 00:24:03.557 --rc geninfo_unexecuted_blocks=1 00:24:03.557 00:24:03.557 ' 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:03.557 10:42:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:03.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:03.557 10:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:24:06.092 Found 0000:82:00.0 (0x8086 - 0x159b) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:24:06.092 Found 0000:82:00.1 (0x8086 - 0x159b) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.092 10:42:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:24:06.092 Found net devices under 0000:82:00.0: cvl_0_0 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:24:06.092 Found net devices under 0000:82:00.1: cvl_0_1 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:06.092 
10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.092 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:06.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:24:06.093 00:24:06.093 --- 10.0.0.2 ping statistics --- 00:24:06.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.093 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:06.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:24:06.093 00:24:06.093 --- 10.0.0.1 ping statistics --- 00:24:06.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.093 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=453016 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 453016 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 453016 ']' 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.093 [2024-11-15 10:42:54.268166] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:24:06.093 [2024-11-15 10:42:54.268261] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.093 [2024-11-15 10:42:54.339490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.093 [2024-11-15 10:42:54.395946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.093 [2024-11-15 10:42:54.396003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.093 [2024-11-15 10:42:54.396027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.093 [2024-11-15 10:42:54.396037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.093 [2024-11-15 10:42:54.396047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.093 [2024-11-15 10:42:54.396714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.093 [2024-11-15 10:42:54.540253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.093 [2024-11-15 10:42:54.548510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.093 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.352 null0 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.352 null1 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=453047 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 453047 /tmp/host.sock 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 453047 ']' 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:06.352 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:06.352 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.352 [2024-11-15 10:42:54.624173] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:24:06.352 [2024-11-15 10:42:54.624266] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453047 ] 00:24:06.352 [2024-11-15 10:42:54.690078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.352 [2024-11-15 10:42:54.748482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:06.611 10:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.611 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:06.611 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:06.611 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.611 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:06.611 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:06.611 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.611 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:06.612 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.870 [2024-11-15 10:42:55.146034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:06.870 10:42:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:06.870 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:24:06.871 10:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:24:07.806 [2024-11-15 10:42:55.928524] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:07.806 [2024-11-15 10:42:55.928552] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:07.806 [2024-11-15 10:42:55.928576] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:07.806 [2024-11-15 10:42:56.016889] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:07.806 [2024-11-15 10:42:56.198034] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:07.806 [2024-11-15 10:42:56.199093] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1fedf80:1 started. 00:24:07.806 [2024-11-15 10:42:56.200869] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:07.806 [2024-11-15 10:42:56.200891] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:07.806 [2024-11-15 10:42:56.206394] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1fedf80 was disconnected and freed. delete nvme_qpair. 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.065 10:42:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:08.065 10:42:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:08.065 [2024-11-15 10:42:56.501072] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1fee3c0:1 started. 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:08.065 [2024-11-15 10:42:56.507038] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1fee3c0 was disconnected and freed. delete nvme_qpair. 
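The waitforcondition loops in this run poll two small helpers until the host-side view catches up with each target-side change (here, nvmf_subsystem_add_ns null1 should surface a second bdev). A minimal sketch of those helpers and the polling pattern, assuming SPDK's scripts/rpc.py in place of the test's rpc_cmd wrapper and the /tmp/host.sock RPC socket used throughout this run:

#!/usr/bin/env bash
# Sketch only: approximates the test's get_subsystem_names/get_bdev_list helpers
# and its waitforcondition polling (max=10 attempts, sleep 1 between retries).
HOST_SOCK=/tmp/host.sock

get_subsystem_names() {
    scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# After nvmf_subsystem_add_ns adds null1, both namespaces should show up as bdevs.
for _ in {1..10}; do
    [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]] && break
    sleep 1
done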
00:24:08.065 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:08.324 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.325 [2024-11-15 10:42:56.586548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:08.325 [2024-11-15 10:42:56.586951] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:08.325 [2024-11-15 10:42:56.586981] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:08.325 10:42:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:08.325 10:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:24:08.325 [2024-11-15 10:42:56.712837] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:08.583 [2024-11-15 10:42:56.812972] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:08.583 [2024-11-15 10:42:56.813034] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:08.583 [2024-11-15 10:42:56.813051] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:08.583 [2024-11-15 10:42:56.813059] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:09.519 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.520 [2024-11-15 10:42:57.794636] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:09.520 [2024-11-15 10:42:57.794686] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:09.520 [2024-11-15 10:42:57.802553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.520 [2024-11-15 10:42:57.802590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.520 [2024-11-15 10:42:57.802607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:09.520 [2024-11-15 10:42:57.802621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.520 [2024-11-15 10:42:57.802636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.520 [2024-11-15 10:42:57.802666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.520 [2024-11-15 10:42:57.802687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.520 [2024-11-15 10:42:57.802701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.520 [2024-11-15 10:42:57.802731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbe550 is same with the state(6) to be set 00:24:09.520 [2024-11-15 10:42:57.812557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbe550 (9): Bad file descriptor 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.520 [2024-11-15 10:42:57.822602] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:09.520 [2024-11-15 10:42:57.822625] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:09.520 [2024-11-15 10:42:57.822636] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:09.520 [2024-11-15 10:42:57.822670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:09.520 [2024-11-15 10:42:57.822701] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:09.520 [2024-11-15 10:42:57.822918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.520 [2024-11-15 10:42:57.822946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbe550 with addr=10.0.0.2, port=4420 00:24:09.520 [2024-11-15 10:42:57.822962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbe550 is same with the state(6) to be set 00:24:09.520 [2024-11-15 10:42:57.822983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbe550 (9): Bad file descriptor 00:24:09.520 [2024-11-15 10:42:57.823014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:09.520 [2024-11-15 10:42:57.823032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:09.520 [2024-11-15 10:42:57.823046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:09.520 [2024-11-15 10:42:57.823058] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:09.520 [2024-11-15 10:42:57.823068] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:24:09.520 [2024-11-15 10:42:57.823075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:09.520 [2024-11-15 10:42:57.832734] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:09.520 [2024-11-15 10:42:57.832754] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:09.520 [2024-11-15 10:42:57.832762] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:09.520 [2024-11-15 10:42:57.832769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:09.520 [2024-11-15 10:42:57.832792] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:09.520 [2024-11-15 10:42:57.832961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.520 [2024-11-15 10:42:57.832986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbe550 with addr=10.0.0.2, port=4420 00:24:09.520 [2024-11-15 10:42:57.833001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbe550 is same with the state(6) to be set 00:24:09.520 [2024-11-15 10:42:57.833022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbe550 (9): Bad file descriptor 00:24:09.520 [2024-11-15 10:42:57.833056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:09.520 [2024-11-15 10:42:57.833073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:09.520 [2024-11-15 10:42:57.833085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:09.520 [2024-11-15 10:42:57.833096] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:09.520 [2024-11-15 10:42:57.833104] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:09.520 [2024-11-15 10:42:57.833110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
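The connect() failed, errno = 111 (ECONNREFUSED) and Bad file descriptor retries above are the expected fallout of the nvmf_subsystem_remove_listener call at host/discovery.sh@127: nothing listens on 10.0.0.2:4420 any more, so the host keeps failing to reconnect that path until the next discovery log page drops it. A minimal sketch of that step, assuming scripts/rpc.py talks to the target on its default RPC socket (the test's rpc_cmd without -s) and to the host app on /tmp/host.sock:

# Sketch only: drop the first listener so only port 4421 stays advertised.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

# Once the stale 4420 entry is gone from the discovery log page, the host should
# report a single remaining path (expected output: 4421), as checked further below.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs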
00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:09.520 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:09.520 [2024-11-15 10:42:57.842827] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:09.520 [2024-11-15 10:42:57.842850] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:09.520 [2024-11-15 10:42:57.842859] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:09.520 [2024-11-15 10:42:57.842867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:09.520 [2024-11-15 10:42:57.842893] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:09.520 [2024-11-15 10:42:57.843125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.520 [2024-11-15 10:42:57.843152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbe550 with addr=10.0.0.2, port=4420 00:24:09.520 [2024-11-15 10:42:57.843168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbe550 is same with the state(6) to be set 00:24:09.520 [2024-11-15 10:42:57.843190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbe550 (9): Bad file descriptor 00:24:09.520 [2024-11-15 10:42:57.843221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:09.521 [2024-11-15 10:42:57.843238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:09.521 [2024-11-15 10:42:57.843252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:09.521 [2024-11-15 10:42:57.843272] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:09.521 [2024-11-15 10:42:57.843281] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:09.521 [2024-11-15 10:42:57.843288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:09.521 [2024-11-15 10:42:57.852928] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:09.521 [2024-11-15 10:42:57.852951] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:09.521 [2024-11-15 10:42:57.852959] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:09.521 [2024-11-15 10:42:57.852967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:09.521 [2024-11-15 10:42:57.852992] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:09.521 [2024-11-15 10:42:57.853140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.521 [2024-11-15 10:42:57.853166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbe550 with addr=10.0.0.2, port=4420 00:24:09.521 [2024-11-15 10:42:57.853182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbe550 is same with the state(6) to be set 00:24:09.521 [2024-11-15 10:42:57.853204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbe550 (9): Bad file descriptor 00:24:09.521 [2024-11-15 10:42:57.853225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:09.521 [2024-11-15 10:42:57.853239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:09.521 [2024-11-15 10:42:57.853252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:09.521 [2024-11-15 10:42:57.853266] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:09.521 [2024-11-15 10:42:57.853274] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:09.521 [2024-11-15 10:42:57.853281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:09.521 [2024-11-15 10:42:57.863027] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:09.521 [2024-11-15 10:42:57.863048] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:09.521 [2024-11-15 10:42:57.863057] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:09.521 [2024-11-15 10:42:57.863064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:09.521 [2024-11-15 10:42:57.863089] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:09.521 [2024-11-15 10:42:57.863208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.521 [2024-11-15 10:42:57.863235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbe550 with addr=10.0.0.2, port=4420 00:24:09.521 [2024-11-15 10:42:57.863251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbe550 is same with the state(6) to be set 00:24:09.521 [2024-11-15 10:42:57.863272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbe550 (9): Bad file descriptor 00:24:09.521 [2024-11-15 10:42:57.863303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:09.521 [2024-11-15 10:42:57.863320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:09.521 [2024-11-15 10:42:57.863338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:09.521 [2024-11-15 10:42:57.863381] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:09.521 [2024-11-15 10:42:57.863392] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:09.521 [2024-11-15 10:42:57.863400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.521 [2024-11-15 10:42:57.873123] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:09.521 [2024-11-15 10:42:57.873143] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:09.521 [2024-11-15 10:42:57.873152] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:09.521 [2024-11-15 10:42:57.873159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:09.521 [2024-11-15 10:42:57.873182] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:09.521 [2024-11-15 10:42:57.873371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.521 [2024-11-15 10:42:57.873399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbe550 with addr=10.0.0.2, port=4420 00:24:09.521 [2024-11-15 10:42:57.873415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbe550 is same with the state(6) to be set 00:24:09.521 [2024-11-15 10:42:57.873437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbe550 (9): Bad file descriptor 00:24:09.521 [2024-11-15 10:42:57.873471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:09.521 [2024-11-15 10:42:57.873488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:09.521 [2024-11-15 10:42:57.873502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:09.521 [2024-11-15 10:42:57.873513] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:09.521 [2024-11-15 10:42:57.873522] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:09.521 [2024-11-15 10:42:57.873529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:24:09.521 [2024-11-15 10:42:57.880500] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:09.521 [2024-11-15 10:42:57.880531] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == 
expected_count))' 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:09.521 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:09.522 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:09.522 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:09.522 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:09.522 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:09.522 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.522 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.522 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:09.522 10:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:09.522 10:42:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:09.780 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:09.781 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.781 10:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.715 [2024-11-15 10:42:59.095255] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:10.715 [2024-11-15 10:42:59.095278] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:10.715 [2024-11-15 10:42:59.095298] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:10.973 [2024-11-15 10:42:59.182608] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:11.232 [2024-11-15 10:42:59.494288] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:11.232 [2024-11-15 10:42:59.495133] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1fe7970:1 started. 
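The trace above shows the host-side discovery poller attaching to the 10.0.0.2:8009 discovery service, reading the log page, and creating a controller for nqn.2016-06.io.spdk:cnode0 on port 4421. A minimal sketch of the same RPC sequence, assuming scripts/rpc.py from an SPDK checkout and a host application listening on /tmp/host.sock (both taken from the trace, not a definitive recipe):

    # start a discovery service against the target's discovery port;
    # -w blocks until the initial attach completes
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w

    # re-using the same -b name while the first discovery service is still running
    # fails with JSON-RPC error -17 ("File exists"); the NOT assertion in the
    # trace below relies on exactly that behaviour
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w || echo 'duplicate discovery name rejected as expected'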
00:24:11.232 [2024-11-15 10:42:59.497246] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:11.232 [2024-11-15 10:42:59.497278] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.232 request: 00:24:11.232 { 00:24:11.232 "name": "nvme", 00:24:11.232 "trtype": "tcp", 00:24:11.232 "traddr": "10.0.0.2", 00:24:11.232 "adrfam": "ipv4", 00:24:11.232 "trsvcid": "8009", 00:24:11.232 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:11.232 "wait_for_attach": true, 00:24:11.232 "method": "bdev_nvme_start_discovery", 00:24:11.232 "req_id": 1 00:24:11.232 } 00:24:11.232 Got JSON-RPC error response 00:24:11.232 response: 00:24:11.232 { 00:24:11.232 "code": -17, 00:24:11.232 "message": "File exists" 00:24:11.232 } 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.232 10:42:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.232 [2024-11-15 10:42:59.546000] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1fe7970 was disconnected and freed. delete nvme_qpair. 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.232 request: 00:24:11.232 { 00:24:11.232 "name": "nvme_second", 00:24:11.232 "trtype": "tcp", 00:24:11.232 "traddr": "10.0.0.2", 00:24:11.232 "adrfam": "ipv4", 00:24:11.232 "trsvcid": "8009", 00:24:11.232 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:11.232 "wait_for_attach": true, 00:24:11.232 "method": 
"bdev_nvme_start_discovery", 00:24:11.232 "req_id": 1 00:24:11.232 } 00:24:11.232 Got JSON-RPC error response 00:24:11.232 response: 00:24:11.232 { 00:24:11.232 "code": -17, 00:24:11.232 "message": "File exists" 00:24:11.232 } 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:11.232 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:11.232 10:42:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.233 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:11.233 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.233 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:11.233 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.233 10:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.607 [2024-11-15 10:43:00.696812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.607 [2024-11-15 10:43:00.696860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff84f0 with addr=10.0.0.2, port=8010 00:24:12.607 [2024-11-15 10:43:00.696889] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:12.607 [2024-11-15 10:43:00.696912] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:12.607 [2024-11-15 10:43:00.696924] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:13.542 [2024-11-15 10:43:01.699233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.542 [2024-11-15 10:43:01.699301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff84f0 with addr=10.0.0.2, port=8010 00:24:13.542 [2024-11-15 10:43:01.699331] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:13.542 [2024-11-15 10:43:01.699372] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:13.542 [2024-11-15 10:43:01.699388] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:14.551 [2024-11-15 10:43:02.701347] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:14.551 request: 00:24:14.551 { 00:24:14.551 "name": "nvme_second", 00:24:14.551 "trtype": "tcp", 00:24:14.551 "traddr": "10.0.0.2", 00:24:14.551 "adrfam": "ipv4", 00:24:14.551 "trsvcid": "8010", 00:24:14.551 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:14.551 "wait_for_attach": false, 00:24:14.551 "attach_timeout_ms": 3000, 00:24:14.551 "method": "bdev_nvme_start_discovery", 00:24:14.551 "req_id": 1 00:24:14.551 } 00:24:14.551 Got JSON-RPC error response 00:24:14.551 response: 00:24:14.551 { 00:24:14.551 "code": -110, 00:24:14.551 "message": "Connection timed out" 00:24:14.551 } 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:14.551 10:43:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 453047 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:14.551 rmmod nvme_tcp 00:24:14.551 rmmod nvme_fabrics 00:24:14.551 rmmod nvme_keyring 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 453016 ']' 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 453016 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 453016 ']' 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 453016 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 453016 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 453016' 00:24:14.551 killing process with pid 453016 00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 453016 
00:24:14.551 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 453016 00:24:14.811 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.811 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:14.811 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:14.811 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:14.811 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:14.811 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:14.811 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:14.811 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:14.811 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:14.811 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.811 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.811 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.733 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.733 00:24:16.733 real 0m13.388s 00:24:16.733 user 0m19.167s 00:24:16.733 sys 0m2.842s 00:24:16.733 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:16.733 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.733 ************************************ 00:24:16.733 END TEST nvmf_host_discovery 00:24:16.733 ************************************ 00:24:16.733 10:43:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:16.733 10:43:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:16.733 10:43:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:16.733 10:43:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.733 ************************************ 00:24:16.733 START TEST nvmf_host_multipath_status 00:24:16.733 ************************************ 00:24:16.733 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:16.992 * Looking for test storage... 
00:24:16.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:16.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.992 --rc genhtml_branch_coverage=1 00:24:16.992 --rc genhtml_function_coverage=1 00:24:16.992 --rc genhtml_legend=1 00:24:16.992 --rc geninfo_all_blocks=1 00:24:16.992 --rc geninfo_unexecuted_blocks=1 00:24:16.992 00:24:16.992 ' 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:16.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.992 --rc genhtml_branch_coverage=1 00:24:16.992 --rc genhtml_function_coverage=1 00:24:16.992 --rc genhtml_legend=1 00:24:16.992 --rc geninfo_all_blocks=1 00:24:16.992 --rc geninfo_unexecuted_blocks=1 00:24:16.992 00:24:16.992 ' 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:16.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.992 --rc genhtml_branch_coverage=1 00:24:16.992 --rc genhtml_function_coverage=1 00:24:16.992 --rc genhtml_legend=1 00:24:16.992 --rc geninfo_all_blocks=1 00:24:16.992 --rc geninfo_unexecuted_blocks=1 00:24:16.992 00:24:16.992 ' 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:16.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.992 --rc genhtml_branch_coverage=1 00:24:16.992 --rc genhtml_function_coverage=1 00:24:16.992 --rc genhtml_legend=1 00:24:16.992 --rc geninfo_all_blocks=1 00:24:16.992 --rc geninfo_unexecuted_blocks=1 00:24:16.992 00:24:16.992 ' 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.992 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:16.993 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:19.529 10:43:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:24:19.529 Found 0000:82:00.0 (0x8086 - 0x159b) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:24:19.529 Found 0000:82:00.1 (0x8086 - 0x159b) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:19.529 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:24:19.530 Found net devices under 0000:82:00.0: cvl_0_0 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: 
cvl_0_1' 00:24:19.530 Found net devices under 0000:82:00.1: cvl_0_1 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.530 10:43:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:19.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:24:19.530 00:24:19.530 --- 10.0.0.2 ping statistics --- 00:24:19.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.530 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:19.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:24:19.530 00:24:19.530 --- 10.0.0.1 ping statistics --- 00:24:19.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.530 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=456081 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 456081 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 456081 ']' 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:19.530 10:43:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:19.530 [2024-11-15 10:43:07.623080] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:24:19.530 [2024-11-15 10:43:07.623166] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.530 [2024-11-15 10:43:07.695171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:19.530 [2024-11-15 10:43:07.754193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.530 [2024-11-15 10:43:07.754256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.530 [2024-11-15 10:43:07.754270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.530 [2024-11-15 10:43:07.754280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.530 [2024-11-15 10:43:07.754289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.530 [2024-11-15 10:43:07.755943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.530 [2024-11-15 10:43:07.755949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=456081 00:24:19.530 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:19.789 [2024-11-15 10:43:08.156264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.789 10:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:20.047 Malloc0 00:24:20.047 10:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:24:20.613 10:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:20.613 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:20.871 [2024-11-15 10:43:09.306503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.871 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:21.129 [2024-11-15 10:43:09.571170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:21.129 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=456367 00:24:21.129 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:21.129 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:21.129 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 456367 /var/tmp/bdevperf.sock 00:24:21.129 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 456367 ']' 00:24:21.129 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.129 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:21.129 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
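At this point the target side has been prepared by the RPCs traced above: a TCP transport, a 64 MiB malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 exposed on both ports 4420 and 4421, so bdevperf can reach the same namespace over two paths. A condensed sketch of that sequence, assuming scripts/rpc.py and build/examples/bdevperf from an SPDK checkout, the default target RPC socket, and with the ip netns handling (cvl_0_0_ns_spdk) seen earlier in the log omitted for brevity:

    # target side: transport, backing bdev, one subsystem with two TCP listeners
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # host side: bdevperf in wait-for-RPC mode (-z) on its own socket,
    # ready to attach Nvme0 over both listeners with -x multipath
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &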
00:24:21.129 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:21.129 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:21.696 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:21.696 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:24:21.696 10:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:21.696 10:43:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:22.630 Nvme0n1 00:24:22.630 10:43:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:22.889 Nvme0n1 00:24:22.889 10:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:22.889 10:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:24.791 10:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:24.791 10:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:25.050 10:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:25.308 10:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:26.684 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:26.684 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:26.684 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.684 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:26.684 10:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.684 10:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:26.684 10:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.684 10:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:26.942 10:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.942 10:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:26.942 10:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.942 10:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:27.508 10:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.508 10:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:27.508 10:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.508 10:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:27.766 10:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.766 10:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:27.766 10:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.766 10:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:28.025 10:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.025 10:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:28.025 10:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.025 10:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:28.283 10:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.283 10:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:28.283 10:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
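The two bdev_nvme_attach_controller calls above register both portals under the same controller name, which is what makes bdevperf expose a single Nvme0n1 bdev with two io_paths. Every port_status line in the trace is then just bdev_nvme_get_io_paths on bdevperf's RPC socket, filtered through jq for one attribute of the path with the matching trsvcid. A rough stand-alone equivalent is sketched below; the relative rpc.py path and the exact helper shape are my own shorthand rather than the script's code.

    # Attach both portals as multipath members of one controller (mirrors the trace above).
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # port_status <port> <attribute> <expected>: read current/connected/accessible for
    # the io_path whose trsvcid matches <port> and succeed only on the expected value.
    port_status() {
        local port=$1 attr=$2 expected=$3 actual
        actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
                 jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    port_status 4420 current true   # e.g. the optimized/optimized starting point above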
00:24:28.542 10:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:28.800 10:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:30.176 10:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:30.176 10:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:30.176 10:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.176 10:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:30.176 10:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:30.176 10:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:30.176 10:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.176 10:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:30.434 10:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.434 10:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:30.434 10:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.434 10:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:31.001 10:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.001 10:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:31.001 10:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.001 10:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:31.260 10:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.260 10:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:31.260 10:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
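Each step of the matrix follows the same pattern: program an ANA state on each listener from the target side, give the host a second to ingest the ANA change, then assert the io_paths view. Below is a condensed helper in the spirit of the script's set_ANA_state (the set_ana name and relative paths are mine), together with the expectations of the non_optimized/optimized step just shown.

    # set_ana <state for 4420> <state for 4421>: program ANA on both listeners, then
    # give the host a moment to pick up the ANA change before checking path status.
    set_ana() {
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
        sleep 1
    }

    set_ana non_optimized optimized
    port_status 4420 current    false   # with the default active_passive policy, only
    port_status 4421 current    true    # the optimized path carries I/O...
    port_status 4420 accessible true    # ...while the non_optimized path stays usable
    port_status 4421 accessible true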
00:24:31.260 10:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:31.518 10:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.518 10:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:31.518 10:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.518 10:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:31.779 10:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.779 10:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:31.779 10:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:32.036 10:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:32.602 10:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:33.562 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:33.562 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:33.562 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.562 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:33.860 10:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.860 10:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:33.860 10:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.860 10:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:34.157 10:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:34.157 10:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:34.157 10:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.157 10:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:34.430 10:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.430 10:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:34.430 10:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.430 10:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:34.688 10:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.688 10:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:34.688 10:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.688 10:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:34.945 10:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.945 10:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:34.945 10:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.945 10:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:35.203 10:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.203 10:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:35.203 10:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:35.769 10:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:36.027 10:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:36.961 10:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:36.961 10:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:36.961 10:43:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.961 10:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:37.220 10:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.220 10:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:37.220 10:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.220 10:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:37.478 10:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.478 10:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:37.478 10:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.478 10:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:37.736 10:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.736 10:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:37.736 10:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.736 10:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:38.303 10:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.303 10:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:38.303 10:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.303 10:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:38.561 10:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.561 10:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:38.561 10:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.561 10:43:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:38.819 10:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:38.819 10:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:38.819 10:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:39.078 10:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:39.336 10:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:40.271 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:40.271 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:40.271 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.271 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:40.530 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.530 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:40.530 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.530 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:40.788 10:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.788 10:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:40.788 10:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.788 10:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:41.354 10:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.354 10:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:41.354 10:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.354 10:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:41.354 10:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.354 10:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:41.354 10:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.355 10:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:41.613 10:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.613 10:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:41.613 10:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.613 10:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:41.871 10:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.871 10:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:41.871 10:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:42.438 10:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:42.438 10:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:43.814 10:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:43.814 10:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:43.814 10:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.814 10:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:43.814 10:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.814 10:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:43.814 10:43:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.814 10:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:44.072 10:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.072 10:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:44.072 10:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.072 10:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:44.637 10:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.637 10:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:44.637 10:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.637 10:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:44.896 10:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.896 10:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:44.896 10:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.896 10:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:45.153 10:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:45.153 10:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:45.153 10:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.153 10:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:45.410 10:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.410 10:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:45.668 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:45.668 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:45.926 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:46.495 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:47.429 10:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:47.429 10:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:47.429 10:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.429 10:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:47.687 10:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.687 10:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:47.687 10:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.687 10:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:47.945 10:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.946 10:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:47.946 10:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.946 10:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:48.204 10:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.204 10:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:48.204 10:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.204 10:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:48.770 10:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.770 10:43:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:48.770 10:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.770 10:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:49.028 10:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.028 10:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:49.028 10:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.028 10:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.286 10:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.286 10:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:49.286 10:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:49.544 10:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:49.802 10:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:51.176 10:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:51.176 10:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:51.176 10:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.176 10:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:51.176 10:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.176 10:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:51.176 10:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.176 10:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:51.435 10:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.435 10:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:51.435 10:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.435 10:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:52.000 10:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.000 10:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:52.000 10:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.001 10:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:52.258 10:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.258 10:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:52.258 10:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.258 10:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:52.517 10:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.517 10:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:52.517 10:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.517 10:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:52.775 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.775 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:52.775 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:53.033 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:53.291 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
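The second half of the run repeats the ANA matrix after switching the bdev's multipath policy from the default active_passive to active_active (the bdev_nvme_set_multipath_policy call a few steps back). Under active_active, I/O is spread across every path in the best reachable ANA state, which is why the checks now expect current==true on both ports whenever the two listeners report an equally usable state, as in the optimized/optimized step earlier and the non_optimized/non_optimized step that follows this point. A minimal sketch of that phase, reusing the set_ana and port_status helpers sketched above:

    # Switch path selection to active_active for the assembled bdev, then confirm that
    # matching ANA states leave both paths current, connected and accessible.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

    set_ana non_optimized non_optimized
    for port in 4420 4421; do
        port_status "$port" current    true
        port_status "$port" connected  true
        port_status "$port" accessible true
    done

All of this happens while the perform_tests job started earlier keeps I/O flowing on Nvme0n1, so the ASYMMETRIC ACCESS INACCESSIBLE completions dumped at the end of the log are consistent with I/O racing against paths being made inaccessible mid-run rather than with a data-path failure.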
00:24:54.667 10:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:54.667 10:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:54.667 10:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.667 10:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:54.667 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.667 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:54.667 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:54.667 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.925 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.925 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:54.925 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.925 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:55.492 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.492 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:55.492 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.492 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:55.750 10:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.750 10:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:55.750 10:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.750 10:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:56.008 10:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.008 10:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:56.008 10:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.008 10:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:56.266 10:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.266 10:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:56.266 10:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:56.523 10:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:57.088 10:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:58.022 10:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:58.022 10:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:58.022 10:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.022 10:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:58.280 10:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.280 10:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:58.280 10:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.280 10:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:58.538 10:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:58.538 10:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:58.538 10:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.538 10:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:58.796 10:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:58.796 10:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:58.796 10:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.796 10:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:59.361 10:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.361 10:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:59.361 10:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.361 10:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:59.620 10:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.620 10:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:59.620 10:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.620 10:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:59.878 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.878 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 456367 00:24:59.878 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 456367 ']' 00:24:59.878 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 456367 00:24:59.878 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:24:59.878 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:59.878 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 456367 00:24:59.878 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:59.878 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:59.878 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 456367' 00:24:59.878 killing process with pid 456367 00:24:59.878 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 456367 00:24:59.878 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 456367 00:24:59.878 { 00:24:59.878 "results": [ 00:24:59.878 { 00:24:59.878 "job": "Nvme0n1", 00:24:59.878 
"core_mask": "0x4", 00:24:59.878 "workload": "verify", 00:24:59.878 "status": "terminated", 00:24:59.878 "verify_range": { 00:24:59.878 "start": 0, 00:24:59.878 "length": 16384 00:24:59.878 }, 00:24:59.878 "queue_depth": 128, 00:24:59.878 "io_size": 4096, 00:24:59.878 "runtime": 36.906025, 00:24:59.878 "iops": 8573.153028536668, 00:24:59.878 "mibps": 33.48887901772136, 00:24:59.878 "io_failed": 0, 00:24:59.878 "io_timeout": 0, 00:24:59.878 "avg_latency_us": 14906.610039641444, 00:24:59.878 "min_latency_us": 491.52, 00:24:59.878 "max_latency_us": 4026531.84 00:24:59.878 } 00:24:59.878 ], 00:24:59.878 "core_count": 1 00:24:59.878 } 00:25:00.139 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 456367 00:25:00.139 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:00.139 [2024-11-15 10:43:09.637980] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:25:00.139 [2024-11-15 10:43:09.638081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456367 ] 00:25:00.139 [2024-11-15 10:43:09.704318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.139 [2024-11-15 10:43:09.762409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.139 Running I/O for 90 seconds... 00:25:00.139 9038.00 IOPS, 35.30 MiB/s [2024-11-15T09:43:48.602Z] 9094.50 IOPS, 35.53 MiB/s [2024-11-15T09:43:48.602Z] 9113.67 IOPS, 35.60 MiB/s [2024-11-15T09:43:48.602Z] 9098.50 IOPS, 35.54 MiB/s [2024-11-15T09:43:48.602Z] 9085.80 IOPS, 35.49 MiB/s [2024-11-15T09:43:48.602Z] 9083.00 IOPS, 35.48 MiB/s [2024-11-15T09:43:48.602Z] 9099.71 IOPS, 35.55 MiB/s [2024-11-15T09:43:48.602Z] 9104.88 IOPS, 35.57 MiB/s [2024-11-15T09:43:48.602Z] 9082.44 IOPS, 35.48 MiB/s [2024-11-15T09:43:48.602Z] 9085.90 IOPS, 35.49 MiB/s [2024-11-15T09:43:48.602Z] 9081.18 IOPS, 35.47 MiB/s [2024-11-15T09:43:48.602Z] 9081.08 IOPS, 35.47 MiB/s [2024-11-15T09:43:48.602Z] 9070.85 IOPS, 35.43 MiB/s [2024-11-15T09:43:48.602Z] 9069.36 IOPS, 35.43 MiB/s [2024-11-15T09:43:48.602Z] 9050.27 IOPS, 35.35 MiB/s [2024-11-15T09:43:48.602Z] 9044.81 IOPS, 35.33 MiB/s [2024-11-15T09:43:48.602Z] [2024-11-15 10:43:27.399441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.139 [2024-11-15 10:43:27.399507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.399582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.399604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.399630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.399648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:00.139 
[2024-11-15 10:43:27.399671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.399693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.399733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.399750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.399773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.399791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.399813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.399830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.399852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.399870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.400552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.400577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.400617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.400636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.400660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.400692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.400716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.400740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.400763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.400779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.400801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.400817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.400840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.400856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.400878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.400894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.400915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.400933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.400955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.400972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.400994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.401011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.401034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.401051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.401073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.139 [2024-11-15 10:43:27.401090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:00.139 [2024-11-15 10:43:27.401113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:117008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.401968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.401984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:93 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402434] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.140 [2024-11-15 10:43:27.402569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:00.140 [2024-11-15 10:43:27.402592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.402609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.402633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.141 [2024-11-15 10:43:27.402655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.402898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.141 [2024-11-15 10:43:27.402928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.402958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.402979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 
sqhd:0073 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403543] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.403945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.403970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 
10:43:27.403986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.404011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.404031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.404056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.404073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.404099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.404116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.404141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.404157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.404182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.404199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.404224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.404240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.404265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.404281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.404307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.404323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.404348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.404389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.404420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117456 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.404437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.404463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:117464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.404480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:00.141 [2024-11-15 10:43:27.404506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.141 [2024-11-15 10:43:27.404523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.404549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:117480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.404570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.404597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.404614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.404640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.404657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.404698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.404715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.404741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.404757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.404782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.404798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.404823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.404839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.404865] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:117536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.404881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.404906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.404922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.404947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.404963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.404988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 
10:43:27.405285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.405983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.405999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.406028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.406045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.406073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.406090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.406118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.406135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:27.406164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:27.406180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:00.142 8568.41 IOPS, 33.47 MiB/s [2024-11-15T09:43:48.605Z] 8092.39 IOPS, 31.61 MiB/s [2024-11-15T09:43:48.605Z] 7666.47 IOPS, 29.95 MiB/s [2024-11-15T09:43:48.605Z] 7283.15 IOPS, 28.45 MiB/s [2024-11-15T09:43:48.605Z] 7317.76 IOPS, 28.59 MiB/s [2024-11-15T09:43:48.605Z] 7396.14 IOPS, 28.89 MiB/s [2024-11-15T09:43:48.605Z] 7467.48 IOPS, 29.17 MiB/s [2024-11-15T09:43:48.605Z] 7625.96 IOPS, 29.79 MiB/s [2024-11-15T09:43:48.605Z] 7787.24 IOPS, 30.42 MiB/s [2024-11-15T09:43:48.605Z] 7939.81 IOPS, 31.01 MiB/s [2024-11-15T09:43:48.605Z] 8041.07 IOPS, 31.41 MiB/s [2024-11-15T09:43:48.605Z] 8076.25 IOPS, 31.55 MiB/s [2024-11-15T09:43:48.605Z] 8105.07 IOPS, 31.66 MiB/s [2024-11-15T09:43:48.605Z] 8133.70 IOPS, 31.77 MiB/s [2024-11-15T09:43:48.605Z] 8209.16 IOPS, 32.07 MiB/s [2024-11-15T09:43:48.605Z] 8328.75 IOPS, 32.53 MiB/s [2024-11-15T09:43:48.605Z] 8439.97 IOPS, 32.97 MiB/s [2024-11-15T09:43:48.605Z] [2024-11-15 10:43:45.288934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:45.288996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:45.289063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.142 [2024-11-15 10:43:45.289084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:00.142 [2024-11-15 10:43:45.289107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.289137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.289161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.289176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.289197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.289213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.289233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.289250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.289271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.289303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.289327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.289343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.289391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.289409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.289449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.143 [2024-11-15 10:43:45.289468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:25:00.143 [2024-11-15 10:43:45.291206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.143 [2024-11-15 10:43:45.291948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.291970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.143 [2024-11-15 10:43:45.291986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.292008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.143 [2024-11-15 10:43:45.292024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:00.143 [2024-11-15 10:43:45.292046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.143 [2024-11-15 10:43:45.292061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:00.143 [... several dozen near-identical nvme_qpair NOTICE pairs omitted here: READ/WRITE commands on qid:1 (lba ~88216-89120) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) status ...] 00:25:00.144 [2024-11-15 10:43:45.293967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.144 [2024-11-15 10:43:45.293983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:00.144 [2024-11-15 10:43:45.294004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.144 [2024-11-15 10:43:45.294019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:00.144 8529.56 IOPS, 33.32 MiB/s [2024-11-15T09:43:48.607Z] 8544.97 IOPS, 33.38 MiB/s [2024-11-15T09:43:48.607Z] 8566.14 IOPS, 33.46 MiB/s [2024-11-15T09:43:48.607Z] Received shutdown signal, test time was about 36.906842 seconds 00:25:00.144 00:25:00.144 Latency(us) 00:25:00.144 [2024-11-15T09:43:48.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.144 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:00.144 Verification LBA range: start 0x0 length 0x4000 00:25:00.144 Nvme0n1 : 36.91 8573.15 33.49 0.00 0.00 14906.61 491.52 4026531.84 00:25:00.144 [2024-11-15T09:43:48.607Z] =================================================================================================================== 00:25:00.144 [2024-11-15T09:43:48.607Z] Total : 8573.15 33.49 0.00 0.00 14906.61 491.52 4026531.84 00:25:00.144 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.403 rmmod nvme_tcp 00:25:00.403 rmmod nvme_fabrics 00:25:00.403 rmmod nvme_keyring 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 456081 ']' 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 456081 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@952 -- # '[' -z 456081 ']' 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 456081 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 456081 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 456081' 00:25:00.403 killing process with pid 456081 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 456081 00:25:00.403 10:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 456081 00:25:00.663 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:00.663 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:00.663 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:00.663 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:00.663 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:00.663 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:00.663 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:00.663 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.663 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.663 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.663 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.663 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.197 10:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.197 00:25:03.197 real 0m45.922s 00:25:03.197 user 2m20.298s 00:25:03.197 sys 0m12.645s 00:25:03.197 10:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:03.197 10:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:03.197 ************************************ 00:25:03.197 END TEST nvmf_host_multipath_status 00:25:03.197 ************************************ 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.198 ************************************ 00:25:03.198 START TEST nvmf_discovery_remove_ifc 00:25:03.198 ************************************ 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:03.198 * Looking for test storage... 00:25:03.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:03.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.198 --rc genhtml_branch_coverage=1 00:25:03.198 --rc genhtml_function_coverage=1 00:25:03.198 --rc genhtml_legend=1 00:25:03.198 --rc geninfo_all_blocks=1 00:25:03.198 --rc geninfo_unexecuted_blocks=1 00:25:03.198 00:25:03.198 ' 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:03.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.198 --rc genhtml_branch_coverage=1 00:25:03.198 --rc genhtml_function_coverage=1 00:25:03.198 --rc genhtml_legend=1 00:25:03.198 --rc geninfo_all_blocks=1 00:25:03.198 --rc geninfo_unexecuted_blocks=1 00:25:03.198 00:25:03.198 ' 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:03.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.198 --rc genhtml_branch_coverage=1 00:25:03.198 --rc genhtml_function_coverage=1 00:25:03.198 --rc genhtml_legend=1 00:25:03.198 --rc geninfo_all_blocks=1 00:25:03.198 --rc geninfo_unexecuted_blocks=1 00:25:03.198 00:25:03.198 ' 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:03.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.198 --rc genhtml_branch_coverage=1 00:25:03.198 --rc genhtml_function_coverage=1 00:25:03.198 --rc genhtml_legend=1 00:25:03.198 --rc geninfo_all_blocks=1 00:25:03.198 --rc geninfo_unexecuted_blocks=1 00:25:03.198 00:25:03.198 ' 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.198 
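The trace above is autotest_common.sh probing the installed lcov version and picking coverage flags: scripts/common.sh splits each version string on '.', '-' and ':' and compares the fields numerically (the cmp_versions/lt/decimal helpers named in the trace). A minimal bash sketch of that element-wise comparison, simplified from those helpers rather than copied from them:

# version_lt A B -> true (exit 0) if version A sorts before version B,
# comparing dot/dash/colon separated fields numerically, as cmp_versions does
version_lt() {
    local IFS=.-:
    local -a ver1=($1) ver2=($2)
    local i d1 d2
    for ((i = 0; i < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); i++)); do
        d1=${ver1[i]:-0} d2=${ver2[i]:-0}
        # non-numeric fields are treated as 0 here; the real decimal() helper is stricter
        [[ $d1 =~ ^[0-9]+$ ]] || d1=0
        [[ $d2 =~ ^[0-9]+$ ]] || d2=0
        ((d1 < d2)) && return 0
        ((d1 > d2)) && return 1
    done
    return 1
}

# matching the check in the trace: lcov 1.15 sorts before 2, so the pre-2.0 flags are kept
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi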
10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.198 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.199 10:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.101 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.101 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.101 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.101 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.101 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.101 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.101 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.101 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.101 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.101 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:05.102 10:43:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:25:05.102 Found 0000:82:00.0 (0x8086 - 0x159b) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.102 10:43:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:25:05.102 Found 0000:82:00.1 (0x8086 - 0x159b) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:25:05.102 Found net devices under 0000:82:00.0: cvl_0_0 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:25:05.102 Found net devices under 0000:82:00.1: cvl_0_1 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.102 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.361 
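For orientation, the nvmf_tcp_init steps traced above boil down to: move the target-side port (cvl_0_0) into its own network namespace, address both ports on 10.0.0.0/24, and open TCP/4420 toward the initiator. A condensed sketch of those commands, with the interface and namespace names taken from this run (they are specific to this test bed and the e810 NICs detected above):

# target port lives in a private namespace; initiator port stays in the root namespace
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# accept NVMe/TCP traffic on the default port (4420) arriving on the initiator interface
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

The cross-namespace pings that follow are the sanity check that this topology is routable in both directions.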
10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:25:05.361 00:25:05.361 --- 10.0.0.2 ping statistics --- 00:25:05.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.361 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:25:05.361 00:25:05.361 --- 10.0.0.1 ping statistics --- 00:25:05.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.361 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:05.361 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.362 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=463100 00:25:05.362 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:05.362 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 463100 00:25:05.362 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 463100 ']' 00:25:05.362 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.362 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:05.362 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:05.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.362 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:05.362 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.362 [2024-11-15 10:43:53.671148] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:25:05.362 [2024-11-15 10:43:53.671233] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.362 [2024-11-15 10:43:53.744226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.362 [2024-11-15 10:43:53.804838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.362 [2024-11-15 10:43:53.804891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.362 [2024-11-15 10:43:53.804906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.362 [2024-11-15 10:43:53.804917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.362 [2024-11-15 10:43:53.804927] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:05.362 [2024-11-15 10:43:53.805557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.620 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:05.620 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:25:05.620 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:05.620 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:05.620 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.620 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.620 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:05.620 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.620 10:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.620 [2024-11-15 10:43:53.959453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.620 [2024-11-15 10:43:53.967696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:05.620 null0 00:25:05.620 [2024-11-15 10:43:53.999550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.620 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.620 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=463129 00:25:05.620 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 463129 /tmp/host.sock 00:25:05.620 10:43:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:05.620 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 463129 ']' 00:25:05.620 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:25:05.620 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:05.620 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:05.620 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:05.620 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:05.620 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.620 [2024-11-15 10:43:54.068766] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:25:05.620 [2024-11-15 10:43:54.068854] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463129 ] 00:25:05.879 [2024-11-15 10:43:54.135149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.879 [2024-11-15 10:43:54.192769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.879 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:05.879 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:25:05.879 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.879 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:05.879 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.879 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.879 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.879 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:05.879 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.879 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:06.137 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.137 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:06.137 10:43:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.137 10:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.070 [2024-11-15 10:43:55.456951] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:07.071 [2024-11-15 10:43:55.456981] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:07.071 [2024-11-15 10:43:55.457008] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:07.329 [2024-11-15 10:43:55.584464] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:07.329 [2024-11-15 10:43:55.646248] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:07.329 [2024-11-15 10:43:55.647270] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21d4be0:1 started. 00:25:07.329 [2024-11-15 10:43:55.648957] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:07.329 [2024-11-15 10:43:55.649017] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:07.329 [2024-11-15 10:43:55.649050] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:07.329 [2024-11-15 10:43:55.649072] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:07.329 [2024-11-15 10:43:55.649107] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:07.329 [2024-11-15 10:43:55.655485] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21d4be0 was disconnected and freed. delete nvme_qpair. 
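The wait_for_bdev/get_bdev_list calls above (and the repeated ones below) are a simple poll of the host app's RPC socket: list all bdev names, normalize the list, and sleep a second until it matches the expectation ("nvme0n1" right after discovery attach, and later "" once the interface has been pulled). A minimal sketch of that loop as it can be reconstructed from the trace; the real helper in discovery_remove_ifc.sh presumably also bounds the number of retries:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# names of all bdevs currently known to the host app on /tmp/host.sock, space separated
get_bdev_list() {
    "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# poll once per second until the bdev list equals the expected string
wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1   # discovery attached the remote namespace as nvme0n1
wait_for_bdev ''        # after the interface removal the bdev must go away again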
00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:07.329 10:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:08.701 10:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:08.701 10:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.701 10:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:08.701 10:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:08.701 10:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.701 10:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.701 10:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:08.701 10:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.701 10:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:08.702 10:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:09.633 10:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:09.633 10:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.633 10:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:09.633 10:43:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.633 10:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:09.633 10:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:09.633 10:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:09.633 10:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.633 10:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:09.633 10:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:10.565 10:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:10.565 10:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.565 10:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:10.565 10:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.565 10:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.565 10:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:10.565 10:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:10.565 10:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.565 10:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:10.565 10:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:11.497 10:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:11.497 10:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.497 10:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.497 10:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:11.497 10:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:11.497 10:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:11.497 10:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:11.497 10:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.497 10:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:11.497 10:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:12.870 10:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:12.870 10:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.870 10:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:12.870 10:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.870 10:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.870 10:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:12.870 10:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:12.870 10:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.870 10:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:12.870 10:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:12.870 [2024-11-15 10:44:01.090482] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:12.870 [2024-11-15 10:44:01.090555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.870 [2024-11-15 10:44:01.090587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.870 [2024-11-15 10:44:01.090604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.870 [2024-11-15 10:44:01.090617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.870 [2024-11-15 10:44:01.090631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.870 [2024-11-15 10:44:01.090654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.870 [2024-11-15 10:44:01.090681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.870 [2024-11-15 10:44:01.090694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.870 [2024-11-15 10:44:01.090706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.870 [2024-11-15 10:44:01.090719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.870 [2024-11-15 10:44:01.090730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b1400 is same with the state(6) to be set 00:25:12.870 [2024-11-15 10:44:01.100499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b1400 (9): Bad file descriptor 00:25:12.870 [2024-11-15 10:44:01.110545] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:12.870 [2024-11-15 10:44:01.110568] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
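The repeated trace above is the suite's one-second polling loop: rpc_cmd issues bdev_get_bdevs against the host application's /tmp/host.sock RPC socket, jq/sort/xargs flatten the result into a single line of bdev names, and the test sleeps whenever that line still reads nvme0n1. A minimal sketch of the two helpers as reconstructed from the xtrace (the function bodies are an approximation, not copied from the suite, and the loop is shown without the timeout handling a real harness would add):

  # Flatten the current bdev list into one sorted, space-separated line.
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # Poll once per second until the bdev list matches the expected value.
  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

  wait_for_bdev ''   # discovery_remove_ifc.sh@79: wait for nvme0n1 to vanish after the target IP is pulled

The check at discovery_remove_ifc.sh@33 ("[[ nvme0n1 != '' ]]") is exactly this comparison failing until the disconnect below finally removes the bdev.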
00:25:12.870 [2024-11-15 10:44:01.110578] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:12.870 [2024-11-15 10:44:01.110587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:12.870 [2024-11-15 10:44:01.110627] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:13.802 10:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:13.802 10:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.802 10:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.802 10:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:13.802 10:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.802 10:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:13.802 10:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:13.802 [2024-11-15 10:44:02.173410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:13.803 [2024-11-15 10:44:02.173459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b1400 with addr=10.0.0.2, port=4420 00:25:13.803 [2024-11-15 10:44:02.173480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b1400 is same with the state(6) to be set 00:25:13.803 [2024-11-15 10:44:02.173528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b1400 (9): Bad file descriptor 00:25:13.803 [2024-11-15 10:44:02.173932] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:25:13.803 [2024-11-15 10:44:02.173971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.803 [2024-11-15 10:44:02.173989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.803 [2024-11-15 10:44:02.174006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:13.803 [2024-11-15 10:44:02.174020] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:13.803 [2024-11-15 10:44:02.174031] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.803 [2024-11-15 10:44:02.174039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:13.803 [2024-11-15 10:44:02.174052] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
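The spdk_sock_recv() errno 110 and the reset/reconnect messages above are the intended fallout of the two commands traced at discovery_remove_ifc.sh@75 and @76: the test deletes the target-side address and downs the link inside the target's network namespace, so the host can no longer re-establish its TCP qpair to 10.0.0.2:4420. Roughly (namespace and interface names as traced; a sketch rather than the script verbatim):

  # Sever connectivity from the target side; the host-side bdev should then disappear.
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

From this point every reconnect attempt is expected to fail until the address is restored later in the test.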
00:25:13.803 [2024-11-15 10:44:02.174060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:13.803 10:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.803 10:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:13.803 10:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:14.736 [2024-11-15 10:44:03.176552] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:14.737 [2024-11-15 10:44:03.176580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:14.737 [2024-11-15 10:44:03.176601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:14.737 [2024-11-15 10:44:03.176618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:14.737 [2024-11-15 10:44:03.176632] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:14.737 [2024-11-15 10:44:03.176660] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:14.737 [2024-11-15 10:44:03.176669] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:14.737 [2024-11-15 10:44:03.176676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:14.737 [2024-11-15 10:44:03.176732] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:14.737 [2024-11-15 10:44:03.176765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.737 [2024-11-15 10:44:03.176791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.737 [2024-11-15 10:44:03.176823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.737 [2024-11-15 10:44:03.176836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.737 [2024-11-15 10:44:03.176849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.737 [2024-11-15 10:44:03.176863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.737 [2024-11-15 10:44:03.176876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.737 [2024-11-15 10:44:03.176889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.737 [2024-11-15 10:44:03.176902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.737 [2024-11-15 10:44:03.176915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.737 [2024-11-15 10:44:03.176929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:25:14.737 [2024-11-15 10:44:03.177066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a0b40 (9): Bad file descriptor 00:25:14.737 [2024-11-15 10:44:03.178090] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:14.737 [2024-11-15 10:44:03.178112] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:14.737 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:14.737 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.737 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:14.737 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.737 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.737 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:14.737 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:14.737 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:14.995 10:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:15.928 10:44:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:15.928 10:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.928 10:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:15.928 10:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.928 10:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.928 10:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:15.928 10:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:15.928 10:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.928 10:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:15.928 10:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:16.866 [2024-11-15 10:44:05.192472] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:16.866 [2024-11-15 10:44:05.192505] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:16.866 [2024-11-15 10:44:05.192529] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:16.866 [2024-11-15 10:44:05.320960] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:17.124 10:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.124 10:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.124 10:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.124 10:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.124 10:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.124 10:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.124 10:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.124 10:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.124 [2024-11-15 10:44:05.381850] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:17.124 [2024-11-15 10:44:05.382638] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x21bbbd0:1 started. 
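At this point the flow reverses: discovery_remove_ifc.sh@82 and @83, traced a little earlier, put the address back and bring the link up, the discovery poller re-attaches to 10.0.0.2:8009, and the log-page callback reports nqn.2016-06.io.spdk:cnode0 as the new subsystem nvme1. The same polling helper is then pointed at the rediscovered namespace. A sketch of this half, under the same assumptions as the earlier one:

  # Restore target-side connectivity, then wait for discovery to recreate the controller.
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1   # satisfied once the attach completes and nvme1n1 shows up in bdev_get_bdevs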
00:25:17.124 [2024-11-15 10:44:05.383996] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:17.124 [2024-11-15 10:44:05.384041] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:17.124 [2024-11-15 10:44:05.384073] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:17.124 [2024-11-15 10:44:05.384095] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:17.124 [2024-11-15 10:44:05.384108] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:17.124 [2024-11-15 10:44:05.390984] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x21bbbd0 was disconnected and freed. delete nvme_qpair. 00:25:17.124 10:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:17.124 10:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 463129 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 463129 ']' 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 463129 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:18.058 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 463129 00:25:18.059 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:18.059 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:18.059 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 463129' 00:25:18.059 killing process with pid 463129 00:25:18.059 
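The teardown that follows kills the two daemons this test started (the host application at pid 463129 here, and the nvmf target at pid 463100 a little further down). The killprocess trace has a recognisable shape: confirm the PID is set and alive, look up its command name, refuse to signal a sudo wrapper, then kill and wait. A condensed sketch reconstructed from the xtrace (the uname check at autotest_common.sh@957 hints at an OS branch; the real helper carries more error handling than shown here):

  killprocess() {
      local pid=$1
      [[ -n "$pid" ]] || return 1                    # @952: no pid, nothing to kill
      kill -0 "$pid" || return 0                     # @956: already gone
      if [[ "$(uname)" == Linux ]]; then             # @957
          local name
          name=$(ps --no-headers -o comm= "$pid")    # @958: e.g. reactor_0 / reactor_1
          [[ "$name" != sudo ]] || return 1          # @962: never signal sudo directly
      fi
      echo "killing process with pid $pid"           # @970
      kill "$pid"                                    # @971
      wait "$pid"                                    # @976: reap it (works because it is our child)
  }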
10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 463129 00:25:18.059 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 463129 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.317 rmmod nvme_tcp 00:25:18.317 rmmod nvme_fabrics 00:25:18.317 rmmod nvme_keyring 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 463100 ']' 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 463100 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 463100 ']' 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 463100 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:18.317 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 463100 00:25:18.576 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:18.576 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:18.576 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 463100' 00:25:18.576 killing process with pid 463100 00:25:18.576 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 463100 00:25:18.576 10:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 463100 00:25:18.576 10:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:18.576 10:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:18.576 10:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:18.576 10:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:18.576 10:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:18.576 10:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:18.576 10:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:18.576 10:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.576 10:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:18.576 10:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.576 10:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.576 10:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:21.231 00:25:21.231 real 0m17.903s 00:25:21.231 user 0m25.776s 00:25:21.231 sys 0m3.144s 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.231 ************************************ 00:25:21.231 END TEST nvmf_discovery_remove_ifc 00:25:21.231 ************************************ 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.231 ************************************ 00:25:21.231 START TEST nvmf_identify_kernel_target 00:25:21.231 ************************************ 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:21.231 * Looking for test storage... 
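nvmftestfini, traced just before the END TEST banner above, is the shared teardown for these tcp host tests: unload the kernel NVMe-over-TCP modules, kill the nvmf target application (pid 463100 in this run), strip only the SPDK-tagged firewall rules by re-applying a filtered iptables-save, then drop the target network namespace and flush the initiator address. A compressed sketch of that sequence reconstructed from the trace (the variable name and the namespace-removal command are assumptions; nvmf/common.sh interleaves these steps with extra checks):

  # Teardown as traced: module unload, target app shutdown, firewall and namespace cleanup.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  killprocess "$nvmfpid"                                 # 463100 here (reactor_1)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except SPDK_NVMF-tagged rules
  ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true   # remove_spdk_ns, assumed equivalent
  ip -4 addr flush cvl_0_1

With that done, run_test moves straight on to identify_kernel_nvmf.sh for the next test.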
00:25:21.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.231 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:21.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.232 --rc genhtml_branch_coverage=1 00:25:21.232 --rc genhtml_function_coverage=1 00:25:21.232 --rc genhtml_legend=1 00:25:21.232 --rc geninfo_all_blocks=1 00:25:21.232 --rc geninfo_unexecuted_blocks=1 00:25:21.232 00:25:21.232 ' 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:21.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.232 --rc genhtml_branch_coverage=1 00:25:21.232 --rc genhtml_function_coverage=1 00:25:21.232 --rc genhtml_legend=1 00:25:21.232 --rc geninfo_all_blocks=1 00:25:21.232 --rc geninfo_unexecuted_blocks=1 00:25:21.232 00:25:21.232 ' 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:21.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.232 --rc genhtml_branch_coverage=1 00:25:21.232 --rc genhtml_function_coverage=1 00:25:21.232 --rc genhtml_legend=1 00:25:21.232 --rc geninfo_all_blocks=1 00:25:21.232 --rc geninfo_unexecuted_blocks=1 00:25:21.232 00:25:21.232 ' 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:21.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.232 --rc genhtml_branch_coverage=1 00:25:21.232 --rc genhtml_function_coverage=1 00:25:21.232 --rc genhtml_legend=1 00:25:21.232 --rc geninfo_all_blocks=1 00:25:21.232 --rc geninfo_unexecuted_blocks=1 00:25:21.232 00:25:21.232 ' 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:21.232 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:25:21.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:21.233 10:44:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:23.135 10:44:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:25:23.135 Found 0000:82:00.0 (0x8086 - 0x159b) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:25:23.135 Found 0000:82:00.1 (0x8086 - 0x159b) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:25:23.135 Found net devices under 0000:82:00.0: cvl_0_0 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:25:23.135 Found net devices under 0000:82:00.1: cvl_0_1 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:23.135 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:23.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:25:23.136 00:25:23.136 --- 10.0.0.2 ping statistics --- 00:25:23.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.136 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:23.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:25:23.136 00:25:23.136 --- 10.0.0.1 ping statistics --- 00:25:23.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.136 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.136 10:44:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:23.136 10:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:24.509 Waiting for block devices as requested 00:25:24.509 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:25:24.509 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:24.767 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:24.767 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:24.767 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:24.767 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:25.025 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:25.025 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:25.025 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:25.025 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:25.285 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:25.285 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:25.285 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:25.285 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:25.545 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:25.545 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:25.545 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
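configure_kernel_target, entered at identify_kernel_nvmf.sh@16 with nqn.2016-06.io.spdk:testnqn and the namespace-side address 10.0.0.1, builds a kernel nvmet target out of the configfs nodes whose paths were just assigned (common.sh@663-@665): one subsystem, one namespace backed by the local NVMe drive that setup.sh reset hands back, and a TCP port on 10.0.0.1:4420. The echoed values are visible in the continuation of the trace, but xtrace does not show the files they are redirected into, so the attribute names below follow the standard nvmet configfs ABI and are the editor's assumption rather than a quote of nvmf/common.sh:

  nvmet=/sys/kernel/config/nvmet
  kernel_subsystem=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  kernel_namespace=$kernel_subsystem/namespaces/1
  kernel_port=$nvmet/ports/1

  modprobe nvmet
  mkdir "$kernel_subsystem" "$kernel_namespace" "$kernel_port"

  echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$kernel_subsystem/attr_model"   # identity string as traced; exact attr_* node assumed
  echo 1            > "$kernel_subsystem/attr_allow_any_host"
  echo /dev/nvme0n1 > "$kernel_namespace/device_path"                        # the unbound local NVMe block device found above
  echo 1            > "$kernel_namespace/enable"
  echo 10.0.0.1     > "$kernel_port/addr_traddr"
  echo tcp          > "$kernel_port/addr_trtype"
  echo 4420         > "$kernel_port/addr_trsvcid"
  echo ipv4         > "$kernel_port/addr_adrfam"
  ln -s "$kernel_subsystem" "$kernel_port/subsystems/"                       # expose the subsystem on the port

Once the symlink lands, the nvme discover call further down should list both the discovery subsystem and nqn.2016-06.io.spdk:testnqn at 10.0.0.1:4420, which is exactly what the two discovery log entries show.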
00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:25.805 No valid GPT data, bailing 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:25:25.805 00:25:25.805 Discovery Log Number of Records 2, Generation counter 2 00:25:25.805 =====Discovery Log Entry 0====== 00:25:25.805 trtype: tcp 00:25:25.805 adrfam: ipv4 00:25:25.805 subtype: current discovery subsystem 00:25:25.805 treq: not specified, sq flow control disable supported 00:25:25.805 portid: 1 00:25:25.805 trsvcid: 4420 00:25:25.805 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:25.805 traddr: 10.0.0.1 00:25:25.805 eflags: none 00:25:25.805 sectype: none 00:25:25.805 =====Discovery Log Entry 1====== 00:25:25.805 trtype: tcp 00:25:25.805 adrfam: ipv4 00:25:25.805 subtype: nvme subsystem 00:25:25.805 treq: not specified, sq flow control disable 
supported 00:25:25.805 portid: 1 00:25:25.805 trsvcid: 4420 00:25:25.805 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:25.805 traddr: 10.0.0.1 00:25:25.805 eflags: none 00:25:25.805 sectype: none 00:25:25.805 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:25.805 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:26.066 ===================================================== 00:25:26.066 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:26.066 ===================================================== 00:25:26.066 Controller Capabilities/Features 00:25:26.066 ================================ 00:25:26.066 Vendor ID: 0000 00:25:26.066 Subsystem Vendor ID: 0000 00:25:26.066 Serial Number: d83c9501c2ef99d3563b 00:25:26.066 Model Number: Linux 00:25:26.066 Firmware Version: 6.8.9-20 00:25:26.066 Recommended Arb Burst: 0 00:25:26.066 IEEE OUI Identifier: 00 00 00 00:25:26.066 Multi-path I/O 00:25:26.066 May have multiple subsystem ports: No 00:25:26.066 May have multiple controllers: No 00:25:26.066 Associated with SR-IOV VF: No 00:25:26.066 Max Data Transfer Size: Unlimited 00:25:26.066 Max Number of Namespaces: 0 00:25:26.066 Max Number of I/O Queues: 1024 00:25:26.066 NVMe Specification Version (VS): 1.3 00:25:26.066 NVMe Specification Version (Identify): 1.3 00:25:26.066 Maximum Queue Entries: 1024 00:25:26.066 Contiguous Queues Required: No 00:25:26.066 Arbitration Mechanisms Supported 00:25:26.066 Weighted Round Robin: Not Supported 00:25:26.066 Vendor Specific: Not Supported 00:25:26.066 Reset Timeout: 7500 ms 00:25:26.066 Doorbell Stride: 4 bytes 00:25:26.066 NVM Subsystem Reset: Not Supported 00:25:26.066 Command Sets Supported 00:25:26.066 NVM Command Set: Supported 00:25:26.066 Boot Partition: Not Supported 00:25:26.066 Memory Page Size Minimum: 4096 bytes 00:25:26.066 Memory Page Size Maximum: 4096 bytes 00:25:26.066 Persistent Memory Region: Not Supported 00:25:26.066 Optional Asynchronous Events Supported 00:25:26.066 Namespace Attribute Notices: Not Supported 00:25:26.066 Firmware Activation Notices: Not Supported 00:25:26.066 ANA Change Notices: Not Supported 00:25:26.066 PLE Aggregate Log Change Notices: Not Supported 00:25:26.066 LBA Status Info Alert Notices: Not Supported 00:25:26.066 EGE Aggregate Log Change Notices: Not Supported 00:25:26.066 Normal NVM Subsystem Shutdown event: Not Supported 00:25:26.066 Zone Descriptor Change Notices: Not Supported 00:25:26.066 Discovery Log Change Notices: Supported 00:25:26.066 Controller Attributes 00:25:26.066 128-bit Host Identifier: Not Supported 00:25:26.066 Non-Operational Permissive Mode: Not Supported 00:25:26.066 NVM Sets: Not Supported 00:25:26.066 Read Recovery Levels: Not Supported 00:25:26.066 Endurance Groups: Not Supported 00:25:26.066 Predictable Latency Mode: Not Supported 00:25:26.066 Traffic Based Keep ALive: Not Supported 00:25:26.066 Namespace Granularity: Not Supported 00:25:26.066 SQ Associations: Not Supported 00:25:26.066 UUID List: Not Supported 00:25:26.066 Multi-Domain Subsystem: Not Supported 00:25:26.066 Fixed Capacity Management: Not Supported 00:25:26.066 Variable Capacity Management: Not Supported 00:25:26.066 Delete Endurance Group: Not Supported 00:25:26.066 Delete NVM Set: Not Supported 00:25:26.066 Extended LBA Formats Supported: Not Supported 00:25:26.066 Flexible Data Placement 
Supported: Not Supported 00:25:26.066 00:25:26.066 Controller Memory Buffer Support 00:25:26.066 ================================ 00:25:26.066 Supported: No 00:25:26.066 00:25:26.066 Persistent Memory Region Support 00:25:26.066 ================================ 00:25:26.066 Supported: No 00:25:26.066 00:25:26.066 Admin Command Set Attributes 00:25:26.066 ============================ 00:25:26.066 Security Send/Receive: Not Supported 00:25:26.066 Format NVM: Not Supported 00:25:26.066 Firmware Activate/Download: Not Supported 00:25:26.066 Namespace Management: Not Supported 00:25:26.066 Device Self-Test: Not Supported 00:25:26.066 Directives: Not Supported 00:25:26.066 NVMe-MI: Not Supported 00:25:26.066 Virtualization Management: Not Supported 00:25:26.066 Doorbell Buffer Config: Not Supported 00:25:26.066 Get LBA Status Capability: Not Supported 00:25:26.066 Command & Feature Lockdown Capability: Not Supported 00:25:26.066 Abort Command Limit: 1 00:25:26.066 Async Event Request Limit: 1 00:25:26.066 Number of Firmware Slots: N/A 00:25:26.066 Firmware Slot 1 Read-Only: N/A 00:25:26.066 Firmware Activation Without Reset: N/A 00:25:26.066 Multiple Update Detection Support: N/A 00:25:26.066 Firmware Update Granularity: No Information Provided 00:25:26.066 Per-Namespace SMART Log: No 00:25:26.066 Asymmetric Namespace Access Log Page: Not Supported 00:25:26.066 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:26.066 Command Effects Log Page: Not Supported 00:25:26.066 Get Log Page Extended Data: Supported 00:25:26.066 Telemetry Log Pages: Not Supported 00:25:26.066 Persistent Event Log Pages: Not Supported 00:25:26.066 Supported Log Pages Log Page: May Support 00:25:26.066 Commands Supported & Effects Log Page: Not Supported 00:25:26.066 Feature Identifiers & Effects Log Page:May Support 00:25:26.066 NVMe-MI Commands & Effects Log Page: May Support 00:25:26.066 Data Area 4 for Telemetry Log: Not Supported 00:25:26.066 Error Log Page Entries Supported: 1 00:25:26.066 Keep Alive: Not Supported 00:25:26.066 00:25:26.066 NVM Command Set Attributes 00:25:26.066 ========================== 00:25:26.066 Submission Queue Entry Size 00:25:26.066 Max: 1 00:25:26.066 Min: 1 00:25:26.066 Completion Queue Entry Size 00:25:26.066 Max: 1 00:25:26.066 Min: 1 00:25:26.066 Number of Namespaces: 0 00:25:26.066 Compare Command: Not Supported 00:25:26.066 Write Uncorrectable Command: Not Supported 00:25:26.066 Dataset Management Command: Not Supported 00:25:26.066 Write Zeroes Command: Not Supported 00:25:26.066 Set Features Save Field: Not Supported 00:25:26.066 Reservations: Not Supported 00:25:26.066 Timestamp: Not Supported 00:25:26.066 Copy: Not Supported 00:25:26.066 Volatile Write Cache: Not Present 00:25:26.066 Atomic Write Unit (Normal): 1 00:25:26.066 Atomic Write Unit (PFail): 1 00:25:26.067 Atomic Compare & Write Unit: 1 00:25:26.067 Fused Compare & Write: Not Supported 00:25:26.067 Scatter-Gather List 00:25:26.067 SGL Command Set: Supported 00:25:26.067 SGL Keyed: Not Supported 00:25:26.067 SGL Bit Bucket Descriptor: Not Supported 00:25:26.067 SGL Metadata Pointer: Not Supported 00:25:26.067 Oversized SGL: Not Supported 00:25:26.067 SGL Metadata Address: Not Supported 00:25:26.067 SGL Offset: Supported 00:25:26.067 Transport SGL Data Block: Not Supported 00:25:26.067 Replay Protected Memory Block: Not Supported 00:25:26.067 00:25:26.067 Firmware Slot Information 00:25:26.067 ========================= 00:25:26.067 Active slot: 0 00:25:26.067 00:25:26.067 00:25:26.067 Error Log 00:25:26.067 
========= 00:25:26.067 00:25:26.067 Active Namespaces 00:25:26.067 ================= 00:25:26.067 Discovery Log Page 00:25:26.067 ================== 00:25:26.067 Generation Counter: 2 00:25:26.067 Number of Records: 2 00:25:26.067 Record Format: 0 00:25:26.067 00:25:26.067 Discovery Log Entry 0 00:25:26.067 ---------------------- 00:25:26.067 Transport Type: 3 (TCP) 00:25:26.067 Address Family: 1 (IPv4) 00:25:26.067 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:26.067 Entry Flags: 00:25:26.067 Duplicate Returned Information: 0 00:25:26.067 Explicit Persistent Connection Support for Discovery: 0 00:25:26.067 Transport Requirements: 00:25:26.067 Secure Channel: Not Specified 00:25:26.067 Port ID: 1 (0x0001) 00:25:26.067 Controller ID: 65535 (0xffff) 00:25:26.067 Admin Max SQ Size: 32 00:25:26.067 Transport Service Identifier: 4420 00:25:26.067 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:26.067 Transport Address: 10.0.0.1 00:25:26.067 Discovery Log Entry 1 00:25:26.067 ---------------------- 00:25:26.067 Transport Type: 3 (TCP) 00:25:26.067 Address Family: 1 (IPv4) 00:25:26.067 Subsystem Type: 2 (NVM Subsystem) 00:25:26.067 Entry Flags: 00:25:26.067 Duplicate Returned Information: 0 00:25:26.067 Explicit Persistent Connection Support for Discovery: 0 00:25:26.067 Transport Requirements: 00:25:26.067 Secure Channel: Not Specified 00:25:26.067 Port ID: 1 (0x0001) 00:25:26.067 Controller ID: 65535 (0xffff) 00:25:26.067 Admin Max SQ Size: 32 00:25:26.067 Transport Service Identifier: 4420 00:25:26.067 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:26.067 Transport Address: 10.0.0.1 00:25:26.067 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:26.067 get_feature(0x01) failed 00:25:26.067 get_feature(0x02) failed 00:25:26.067 get_feature(0x04) failed 00:25:26.067 ===================================================== 00:25:26.067 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:26.067 ===================================================== 00:25:26.067 Controller Capabilities/Features 00:25:26.067 ================================ 00:25:26.067 Vendor ID: 0000 00:25:26.067 Subsystem Vendor ID: 0000 00:25:26.067 Serial Number: 8b936410cb3c47d46c07 00:25:26.067 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:26.067 Firmware Version: 6.8.9-20 00:25:26.067 Recommended Arb Burst: 6 00:25:26.067 IEEE OUI Identifier: 00 00 00 00:25:26.067 Multi-path I/O 00:25:26.067 May have multiple subsystem ports: Yes 00:25:26.067 May have multiple controllers: Yes 00:25:26.067 Associated with SR-IOV VF: No 00:25:26.067 Max Data Transfer Size: Unlimited 00:25:26.067 Max Number of Namespaces: 1024 00:25:26.067 Max Number of I/O Queues: 128 00:25:26.067 NVMe Specification Version (VS): 1.3 00:25:26.067 NVMe Specification Version (Identify): 1.3 00:25:26.067 Maximum Queue Entries: 1024 00:25:26.067 Contiguous Queues Required: No 00:25:26.067 Arbitration Mechanisms Supported 00:25:26.067 Weighted Round Robin: Not Supported 00:25:26.067 Vendor Specific: Not Supported 00:25:26.067 Reset Timeout: 7500 ms 00:25:26.067 Doorbell Stride: 4 bytes 00:25:26.067 NVM Subsystem Reset: Not Supported 00:25:26.067 Command Sets Supported 00:25:26.067 NVM Command Set: Supported 00:25:26.067 Boot Partition: Not Supported 00:25:26.067 
Memory Page Size Minimum: 4096 bytes 00:25:26.067 Memory Page Size Maximum: 4096 bytes 00:25:26.067 Persistent Memory Region: Not Supported 00:25:26.067 Optional Asynchronous Events Supported 00:25:26.067 Namespace Attribute Notices: Supported 00:25:26.067 Firmware Activation Notices: Not Supported 00:25:26.067 ANA Change Notices: Supported 00:25:26.067 PLE Aggregate Log Change Notices: Not Supported 00:25:26.067 LBA Status Info Alert Notices: Not Supported 00:25:26.067 EGE Aggregate Log Change Notices: Not Supported 00:25:26.067 Normal NVM Subsystem Shutdown event: Not Supported 00:25:26.067 Zone Descriptor Change Notices: Not Supported 00:25:26.067 Discovery Log Change Notices: Not Supported 00:25:26.067 Controller Attributes 00:25:26.067 128-bit Host Identifier: Supported 00:25:26.067 Non-Operational Permissive Mode: Not Supported 00:25:26.067 NVM Sets: Not Supported 00:25:26.067 Read Recovery Levels: Not Supported 00:25:26.067 Endurance Groups: Not Supported 00:25:26.067 Predictable Latency Mode: Not Supported 00:25:26.067 Traffic Based Keep ALive: Supported 00:25:26.067 Namespace Granularity: Not Supported 00:25:26.067 SQ Associations: Not Supported 00:25:26.067 UUID List: Not Supported 00:25:26.067 Multi-Domain Subsystem: Not Supported 00:25:26.067 Fixed Capacity Management: Not Supported 00:25:26.067 Variable Capacity Management: Not Supported 00:25:26.067 Delete Endurance Group: Not Supported 00:25:26.067 Delete NVM Set: Not Supported 00:25:26.067 Extended LBA Formats Supported: Not Supported 00:25:26.067 Flexible Data Placement Supported: Not Supported 00:25:26.067 00:25:26.067 Controller Memory Buffer Support 00:25:26.067 ================================ 00:25:26.067 Supported: No 00:25:26.067 00:25:26.067 Persistent Memory Region Support 00:25:26.067 ================================ 00:25:26.067 Supported: No 00:25:26.067 00:25:26.067 Admin Command Set Attributes 00:25:26.067 ============================ 00:25:26.067 Security Send/Receive: Not Supported 00:25:26.067 Format NVM: Not Supported 00:25:26.067 Firmware Activate/Download: Not Supported 00:25:26.067 Namespace Management: Not Supported 00:25:26.067 Device Self-Test: Not Supported 00:25:26.067 Directives: Not Supported 00:25:26.067 NVMe-MI: Not Supported 00:25:26.067 Virtualization Management: Not Supported 00:25:26.067 Doorbell Buffer Config: Not Supported 00:25:26.067 Get LBA Status Capability: Not Supported 00:25:26.067 Command & Feature Lockdown Capability: Not Supported 00:25:26.067 Abort Command Limit: 4 00:25:26.067 Async Event Request Limit: 4 00:25:26.067 Number of Firmware Slots: N/A 00:25:26.067 Firmware Slot 1 Read-Only: N/A 00:25:26.067 Firmware Activation Without Reset: N/A 00:25:26.067 Multiple Update Detection Support: N/A 00:25:26.067 Firmware Update Granularity: No Information Provided 00:25:26.067 Per-Namespace SMART Log: Yes 00:25:26.067 Asymmetric Namespace Access Log Page: Supported 00:25:26.067 ANA Transition Time : 10 sec 00:25:26.067 00:25:26.067 Asymmetric Namespace Access Capabilities 00:25:26.067 ANA Optimized State : Supported 00:25:26.067 ANA Non-Optimized State : Supported 00:25:26.067 ANA Inaccessible State : Supported 00:25:26.067 ANA Persistent Loss State : Supported 00:25:26.067 ANA Change State : Supported 00:25:26.067 ANAGRPID is not changed : No 00:25:26.067 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:26.067 00:25:26.067 ANA Group Identifier Maximum : 128 00:25:26.067 Number of ANA Group Identifiers : 128 00:25:26.067 Max Number of Allowed Namespaces : 1024 00:25:26.067 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:26.067 Command Effects Log Page: Supported 00:25:26.067 Get Log Page Extended Data: Supported 00:25:26.067 Telemetry Log Pages: Not Supported 00:25:26.067 Persistent Event Log Pages: Not Supported 00:25:26.067 Supported Log Pages Log Page: May Support 00:25:26.067 Commands Supported & Effects Log Page: Not Supported 00:25:26.067 Feature Identifiers & Effects Log Page:May Support 00:25:26.067 NVMe-MI Commands & Effects Log Page: May Support 00:25:26.067 Data Area 4 for Telemetry Log: Not Supported 00:25:26.067 Error Log Page Entries Supported: 128 00:25:26.067 Keep Alive: Supported 00:25:26.067 Keep Alive Granularity: 1000 ms 00:25:26.067 00:25:26.067 NVM Command Set Attributes 00:25:26.067 ========================== 00:25:26.067 Submission Queue Entry Size 00:25:26.067 Max: 64 00:25:26.067 Min: 64 00:25:26.067 Completion Queue Entry Size 00:25:26.067 Max: 16 00:25:26.067 Min: 16 00:25:26.067 Number of Namespaces: 1024 00:25:26.067 Compare Command: Not Supported 00:25:26.067 Write Uncorrectable Command: Not Supported 00:25:26.067 Dataset Management Command: Supported 00:25:26.068 Write Zeroes Command: Supported 00:25:26.068 Set Features Save Field: Not Supported 00:25:26.068 Reservations: Not Supported 00:25:26.068 Timestamp: Not Supported 00:25:26.068 Copy: Not Supported 00:25:26.068 Volatile Write Cache: Present 00:25:26.068 Atomic Write Unit (Normal): 1 00:25:26.068 Atomic Write Unit (PFail): 1 00:25:26.068 Atomic Compare & Write Unit: 1 00:25:26.068 Fused Compare & Write: Not Supported 00:25:26.068 Scatter-Gather List 00:25:26.068 SGL Command Set: Supported 00:25:26.068 SGL Keyed: Not Supported 00:25:26.068 SGL Bit Bucket Descriptor: Not Supported 00:25:26.068 SGL Metadata Pointer: Not Supported 00:25:26.068 Oversized SGL: Not Supported 00:25:26.068 SGL Metadata Address: Not Supported 00:25:26.068 SGL Offset: Supported 00:25:26.068 Transport SGL Data Block: Not Supported 00:25:26.068 Replay Protected Memory Block: Not Supported 00:25:26.068 00:25:26.068 Firmware Slot Information 00:25:26.068 ========================= 00:25:26.068 Active slot: 0 00:25:26.068 00:25:26.068 Asymmetric Namespace Access 00:25:26.068 =========================== 00:25:26.068 Change Count : 0 00:25:26.068 Number of ANA Group Descriptors : 1 00:25:26.068 ANA Group Descriptor : 0 00:25:26.068 ANA Group ID : 1 00:25:26.068 Number of NSID Values : 1 00:25:26.068 Change Count : 0 00:25:26.068 ANA State : 1 00:25:26.068 Namespace Identifier : 1 00:25:26.068 00:25:26.068 Commands Supported and Effects 00:25:26.068 ============================== 00:25:26.068 Admin Commands 00:25:26.068 -------------- 00:25:26.068 Get Log Page (02h): Supported 00:25:26.068 Identify (06h): Supported 00:25:26.068 Abort (08h): Supported 00:25:26.068 Set Features (09h): Supported 00:25:26.068 Get Features (0Ah): Supported 00:25:26.068 Asynchronous Event Request (0Ch): Supported 00:25:26.068 Keep Alive (18h): Supported 00:25:26.068 I/O Commands 00:25:26.068 ------------ 00:25:26.068 Flush (00h): Supported 00:25:26.068 Write (01h): Supported LBA-Change 00:25:26.068 Read (02h): Supported 00:25:26.068 Write Zeroes (08h): Supported LBA-Change 00:25:26.068 Dataset Management (09h): Supported 00:25:26.068 00:25:26.068 Error Log 00:25:26.068 ========= 00:25:26.068 Entry: 0 00:25:26.068 Error Count: 0x3 00:25:26.068 Submission Queue Id: 0x0 00:25:26.068 Command Id: 0x5 00:25:26.068 Phase Bit: 0 00:25:26.068 Status Code: 0x2 00:25:26.068 Status Code Type: 0x0 00:25:26.068 Do Not Retry: 1 00:25:26.068 
Error Location: 0x28 00:25:26.068 LBA: 0x0 00:25:26.068 Namespace: 0x0 00:25:26.068 Vendor Log Page: 0x0 00:25:26.068 ----------- 00:25:26.068 Entry: 1 00:25:26.068 Error Count: 0x2 00:25:26.068 Submission Queue Id: 0x0 00:25:26.068 Command Id: 0x5 00:25:26.068 Phase Bit: 0 00:25:26.068 Status Code: 0x2 00:25:26.068 Status Code Type: 0x0 00:25:26.068 Do Not Retry: 1 00:25:26.068 Error Location: 0x28 00:25:26.068 LBA: 0x0 00:25:26.068 Namespace: 0x0 00:25:26.068 Vendor Log Page: 0x0 00:25:26.068 ----------- 00:25:26.068 Entry: 2 00:25:26.068 Error Count: 0x1 00:25:26.068 Submission Queue Id: 0x0 00:25:26.068 Command Id: 0x4 00:25:26.068 Phase Bit: 0 00:25:26.068 Status Code: 0x2 00:25:26.068 Status Code Type: 0x0 00:25:26.068 Do Not Retry: 1 00:25:26.068 Error Location: 0x28 00:25:26.068 LBA: 0x0 00:25:26.068 Namespace: 0x0 00:25:26.068 Vendor Log Page: 0x0 00:25:26.068 00:25:26.068 Number of Queues 00:25:26.068 ================ 00:25:26.068 Number of I/O Submission Queues: 128 00:25:26.068 Number of I/O Completion Queues: 128 00:25:26.068 00:25:26.068 ZNS Specific Controller Data 00:25:26.068 ============================ 00:25:26.068 Zone Append Size Limit: 0 00:25:26.068 00:25:26.068 00:25:26.068 Active Namespaces 00:25:26.068 ================= 00:25:26.068 get_feature(0x05) failed 00:25:26.068 Namespace ID:1 00:25:26.068 Command Set Identifier: NVM (00h) 00:25:26.068 Deallocate: Supported 00:25:26.068 Deallocated/Unwritten Error: Not Supported 00:25:26.068 Deallocated Read Value: Unknown 00:25:26.068 Deallocate in Write Zeroes: Not Supported 00:25:26.068 Deallocated Guard Field: 0xFFFF 00:25:26.068 Flush: Supported 00:25:26.068 Reservation: Not Supported 00:25:26.068 Namespace Sharing Capabilities: Multiple Controllers 00:25:26.068 Size (in LBAs): 3907029168 (1863GiB) 00:25:26.068 Capacity (in LBAs): 3907029168 (1863GiB) 00:25:26.068 Utilization (in LBAs): 3907029168 (1863GiB) 00:25:26.068 UUID: 8aee2f40-93e8-4a62-bef9-134399c0e1fe 00:25:26.068 Thin Provisioning: Not Supported 00:25:26.068 Per-NS Atomic Units: Yes 00:25:26.068 Atomic Boundary Size (Normal): 0 00:25:26.068 Atomic Boundary Size (PFail): 0 00:25:26.068 Atomic Boundary Offset: 0 00:25:26.068 NGUID/EUI64 Never Reused: No 00:25:26.068 ANA group ID: 1 00:25:26.068 Namespace Write Protected: No 00:25:26.068 Number of LBA Formats: 1 00:25:26.068 Current LBA Format: LBA Format #00 00:25:26.068 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:26.068 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:26.068 rmmod nvme_tcp 00:25:26.068 rmmod nvme_fabrics 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:26.068 10:44:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.068 10:44:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.605 10:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:28.605 10:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:28.605 10:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:28.605 10:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:28.605 10:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:28.605 10:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:28.605 10:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:28.605 10:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:28.605 10:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:28.605 10:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:28.605 10:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:29.543 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:29.543 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:29.543 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:29.543 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:29.543 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:29.543 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:25:29.543 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:29.543 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:29.543 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:29.543 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:29.543 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:29.543 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:29.543 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:29.543 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:29.543 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:29.543 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:31.450 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:25:31.450 00:25:31.450 real 0m10.779s 00:25:31.450 user 0m2.143s 00:25:31.450 sys 0m3.799s 00:25:31.450 10:44:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:31.450 10:44:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:31.450 ************************************ 00:25:31.450 END TEST nvmf_identify_kernel_target 00:25:31.450 ************************************ 00:25:31.450 10:44:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:31.450 10:44:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:31.450 10:44:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:31.450 10:44:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.709 ************************************ 00:25:31.709 START TEST nvmf_auth_host 00:25:31.709 ************************************ 00:25:31.709 10:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:31.709 * Looking for test storage... 
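That closes nvmf_identify_kernel_target: the get_feature(0x01/0x02/0x04/0x05) failed lines appear to be spdk_nvme_identify probing optional features the Linux target rejects rather than a failure of the test itself, and the test ends by unwinding the configfs tree and handing the NVMe device back to vfio-pci. For reference, these are the three host-side probes it ran against the kernel target, copied from the trace (binary paths shortened; the hostnqn/hostid pair comes from the nvme gen-hostnqn call when common.sh is sourced):

    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd \
                  --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420
    # identify the discovery controller, then the exported subsystem itself
    spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
    spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'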
00:25:31.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:31.709 10:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:31.709 10:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:25:31.709 10:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:31.709 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:31.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.710 --rc genhtml_branch_coverage=1 00:25:31.710 --rc genhtml_function_coverage=1 00:25:31.710 --rc genhtml_legend=1 00:25:31.710 --rc geninfo_all_blocks=1 00:25:31.710 --rc geninfo_unexecuted_blocks=1 00:25:31.710 00:25:31.710 ' 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:31.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.710 --rc genhtml_branch_coverage=1 00:25:31.710 --rc genhtml_function_coverage=1 00:25:31.710 --rc genhtml_legend=1 00:25:31.710 --rc geninfo_all_blocks=1 00:25:31.710 --rc geninfo_unexecuted_blocks=1 00:25:31.710 00:25:31.710 ' 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:31.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.710 --rc genhtml_branch_coverage=1 00:25:31.710 --rc genhtml_function_coverage=1 00:25:31.710 --rc genhtml_legend=1 00:25:31.710 --rc geninfo_all_blocks=1 00:25:31.710 --rc geninfo_unexecuted_blocks=1 00:25:31.710 00:25:31.710 ' 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:31.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.710 --rc genhtml_branch_coverage=1 00:25:31.710 --rc genhtml_function_coverage=1 00:25:31.710 --rc genhtml_legend=1 00:25:31.710 --rc geninfo_all_blocks=1 00:25:31.710 --rc geninfo_unexecuted_blocks=1 00:25:31.710 00:25:31.710 ' 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.710 10:44:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:31.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:31.710 10:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:34.240 10:44:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:25:34.240 Found 0000:82:00.0 (0x8086 - 0x159b) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:25:34.240 Found 0000:82:00.1 (0x8086 - 0x159b) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.240 
10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:25:34.240 Found net devices under 0000:82:00.0: cvl_0_0 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:25:34.240 Found net devices under 0000:82:00.1: cvl_0_1 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.240 10:44:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:34.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:25:34.240 00:25:34.240 --- 10.0.0.2 ping statistics --- 00:25:34.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.240 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:34.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:25:34.240 00:25:34.240 --- 10.0.0.1 ping statistics --- 00:25:34.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.240 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.240 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=470355 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 470355 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 470355 ']' 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
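nvmftestinit has now rebuilt the same two-port layout the previous test used: the target-side E810 port is pushed into its own network namespace and the SPDK target runs inside it, so NVMe/TCP traffic genuinely crosses the link between cvl_0_0 and cvl_0_1 (that the two ports are cabled back-to-back is implied by NET_TYPE=phy, not shown directly). Condensed from the trace above, with the nvmf_tgt path shortened:

    ip netns add cvl_0_0_ns_spdk                           # target lives in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                     # sanity checks in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # nvmfappstart then launches the target inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth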
00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:34.241 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bf55167af752c7d128a9bd63425611c6 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.0u1 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bf55167af752c7d128a9bd63425611c6 0 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bf55167af752c7d128a9bd63425611c6 0 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bf55167af752c7d128a9bd63425611c6 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.0u1 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.0u1 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.0u1 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:34.500 10:44:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=16d85c7c704c389280e3fbd2aa92a33e518e08bf63c0d96df118e498a5ae3bc2 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oL1 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 16d85c7c704c389280e3fbd2aa92a33e518e08bf63c0d96df118e498a5ae3bc2 3 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 16d85c7c704c389280e3fbd2aa92a33e518e08bf63c0d96df118e498a5ae3bc2 3 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=16d85c7c704c389280e3fbd2aa92a33e518e08bf63c0d96df118e498a5ae3bc2 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oL1 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oL1 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.oL1 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=996b1c737e91289b4ad069729c90452ffae305c2a6cfcdfb 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1lO 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 996b1c737e91289b4ad069729c90452ffae305c2a6cfcdfb 0 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 996b1c737e91289b4ad069729c90452ffae305c2a6cfcdfb 0 
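gen_dhchap_key, traced here for each of the five key slots, draws len/2 random bytes with xxd, keeps the resulting hex string as the secret, and hands it to format_dhchap_key, whose body is a "python -" heredoc the xtrace does not show. The sketch below is an assumed reconstruction of that step: the printed value is the NVMe-oF DH-HMAC-CHAP secret representation, DHHC-1:<digest>:<base64>:, where the digest index follows the table in the trace (null=0, sha256=1, sha384=2, sha512=3) and the base64 payload is the secret with a CRC-32 appended (little-endian byte order is my assumption); the real helper in nvmf/common.sh may differ in detail.

# Sketch of one gen_dhchap_key round (null digest, 48-char secret) as traced above;
# the python body is assumed, not copied from the SPDK sources.
digest=null len=48
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # hex string, $len characters
file=$(mktemp -t "spdk.key-$digest.XXX")
python3 - "$key" "$digest" > "$file" <<'PY'
import sys, base64, zlib, struct
digests = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}
key = sys.argv[1].encode()
# DH-HMAC-CHAP secret: base64(secret || CRC-32 of secret); LE byte order is assumed here
blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key) & 0xffffffff)).decode()
print("DHHC-1:%02x:%s:" % (digests[sys.argv[2]], blob))
PY
chmod 0600 "$file"

Decoding keys[1] from this run (DHHC-1:00:OTk2YjFj...+syEcg==:) gives back the 48-character hex string 996b1c737e91289b4ad069729c90452ffae305c2a6cfcdfb plus four trailing bytes, consistent with an appended CRC-32.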
00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=996b1c737e91289b4ad069729c90452ffae305c2a6cfcdfb 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1lO 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1lO 00:25:34.500 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.1lO 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=494a213aeba25fcd1934abc64dbf05ea36a016a3f8ef2006 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.eym 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 494a213aeba25fcd1934abc64dbf05ea36a016a3f8ef2006 2 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 494a213aeba25fcd1934abc64dbf05ea36a016a3f8ef2006 2 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=494a213aeba25fcd1934abc64dbf05ea36a016a3f8ef2006 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.eym 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.eym 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.eym 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:34.501 10:44:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c67d5e4718fb6843879be6877da5a995 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jXZ 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c67d5e4718fb6843879be6877da5a995 1 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c67d5e4718fb6843879be6877da5a995 1 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c67d5e4718fb6843879be6877da5a995 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:34.501 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jXZ 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jXZ 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jXZ 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5d95648b43652cc1aeef9c57db9afda4 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.s2L 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5d95648b43652cc1aeef9c57db9afda4 1 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5d95648b43652cc1aeef9c57db9afda4 1 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=5d95648b43652cc1aeef9c57db9afda4 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:34.759 10:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.s2L 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.s2L 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.s2L 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3042673758ce0d4a53aa4b8b010910066ae128d0a1701a58 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zcM 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3042673758ce0d4a53aa4b8b010910066ae128d0a1701a58 2 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3042673758ce0d4a53aa4b8b010910066ae128d0a1701a58 2 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3042673758ce0d4a53aa4b8b010910066ae128d0a1701a58 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:34.759 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zcM 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zcM 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.zcM 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:34.760 10:44:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=59ab31c60f373745eb7f8461c59ef8cf 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7ge 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 59ab31c60f373745eb7f8461c59ef8cf 0 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 59ab31c60f373745eb7f8461c59ef8cf 0 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=59ab31c60f373745eb7f8461c59ef8cf 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7ge 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7ge 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.7ge 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a4f8ef9b78ee35fd913e54c3c84b0c1df978509deb121801c7b55a97f005c969 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GBU 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a4f8ef9b78ee35fd913e54c3c84b0c1df978509deb121801c7b55a97f005c969 3 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a4f8ef9b78ee35fd913e54c3c84b0c1df978509deb121801c7b55a97f005c969 3 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a4f8ef9b78ee35fd913e54c3c84b0c1df978509deb121801c7b55a97f005c969 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GBU 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GBU 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.GBU 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 470355 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 470355 ']' 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:34.760 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0u1 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.oL1 ]] 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oL1 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.1lO 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.eym ]] 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.eym 00:25:35.019 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jXZ 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.s2L ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.s2L 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.zcM 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.7ge ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.7ge 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.GBU 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.276 10:44:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:35.276 10:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:36.207 Waiting for block devices as requested 00:25:36.465 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:25:36.465 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:36.465 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:36.723 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:36.723 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:36.723 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:36.981 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:36.981 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:36.981 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:36.981 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:37.239 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:37.239 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:37.239 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:37.239 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:37.496 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:37.496 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:37.496 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:38.064 No valid GPT data, bailing 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:38.064 10:44:26 
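configure_kernel_target, whose trace starts here, stands up a kernel-space nvmet subsystem backed by the local /dev/nvme0n1 (the only unpartitioned, non-zoned block device found above) and exposes it to the SPDK side on 10.0.0.1:4420. The xtrace records only the echoed values, not the configfs files they are redirected into, so the attribute paths below use the standard /sys/kernel/config/nvmet layout and should be read as an assumption about what nvmf/common.sh writes:

# Assumed redirection targets for the mkdir/echo sequence traced here and on the
# following lines (standard nvmet configfs layout; values are taken from the log).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir $subsys $subsys/namespaces/1 $nvmet/ports/1
echo 1            > $subsys/attr_allow_any_host       # assumed target of the first "echo 1"
echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
echo 1            > $subsys/namespaces/1/enable
echo 10.0.0.1     > $nvmet/ports/1/addr_traddr
echo tcp          > $nvmet/ports/1/addr_trtype
echo 4420         > $nvmet/ports/1/addr_trsvcid
echo ipv4         > $nvmet/ports/1/addr_adrfam
ln -s $subsys $nvmet/ports/1/subsystems/

The "echo SPDK-nqn.2024-02.io.spdk:cnode0" in the trace presumably fills the subsystem's serial/model attribute; its target file is likewise not visible in the log.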
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:25:38.064 00:25:38.064 Discovery Log Number of Records 2, Generation counter 2 00:25:38.064 =====Discovery Log Entry 0====== 00:25:38.064 trtype: tcp 00:25:38.064 adrfam: ipv4 00:25:38.064 subtype: current discovery subsystem 00:25:38.064 treq: not specified, sq flow control disable supported 00:25:38.064 portid: 1 00:25:38.064 trsvcid: 4420 00:25:38.064 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:38.064 traddr: 10.0.0.1 00:25:38.064 eflags: none 00:25:38.064 sectype: none 00:25:38.064 =====Discovery Log Entry 1====== 00:25:38.064 trtype: tcp 00:25:38.064 adrfam: ipv4 00:25:38.064 subtype: nvme subsystem 00:25:38.064 treq: not specified, sq flow control disable supported 00:25:38.064 portid: 1 00:25:38.064 trsvcid: 4420 00:25:38.064 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:38.064 traddr: 10.0.0.1 00:25:38.064 eflags: none 00:25:38.064 sectype: none 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:38.064 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.065 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.323 nvme0n1 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
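nvmet_auth_set_key, traced just above for sha256/ffdhe2048/keyid 0, is the target half of each test iteration: it pushes the digest, DH group, host secret and controller secret into the kernel host entry created at host/auth.sh@36-38 (mkdir under /sys/kernel/config/nvmet/hosts plus the allowed_hosts symlink). The echo targets are not visible in the xtrace; the paths below are the standard nvmet per-host DH-HMAC-CHAP attributes and are an assumption about where host/auth.sh writes, while the values are the ones printed in the trace.

# Assumed targets of the nvmet_auth_set_key echoes traced above; the attribute
# paths are an assumption, the values come from the log (keyid 0).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
key=$(cat /tmp/spdk.key-null.0u1)         # keys[0],  DHHC-1:00:...
ckey=$(cat /tmp/spdk.key-sha512.oL1)      # ckeys[0], DHHC-1:03:...

mkdir $host
ln -s $host $subsys/allowed_hosts/nqn.2024-02.io.spdk:host0
echo 0              > $subsys/attr_allow_any_host   # assumed target of the "echo 0" at host/auth.sh@37
echo 'hmac(sha256)' > $host/dhchap_hash
echo ffdhe2048      > $host/dhchap_dhgroup
echo "$key"         > $host/dhchap_key
echo "$ckey"        > $host/dhchap_ctrl_key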
00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.323 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.582 nvme0n1 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.582 10:44:26 
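The initiator half of the same iteration is the connect_authenticate body just traced: the generated secrets were registered as keyring entries earlier (keyring_file_add_key key0..key4 / ckey0..ckey3), bdev_nvme_set_options pins the allowed DH-HMAC-CHAP digest and DH group, and bdev_nvme_attach_controller connects to the kernel target at 10.0.0.1:4420 with --dhchap-key/--dhchap-ctrlr-key. Assuming rpc_cmd expands to scripts/rpc.py against the nvmf_tgt RPC socket (/var/tmp/spdk.sock), one iteration looks like the sketch below; option names and values are taken from the trace.

# One connect_authenticate iteration spelled out with scripts/rpc.py
# (assumed expansion of rpc_cmd); flags and values as traced above for keyid 0.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc keyring_file_add_key key0  /tmp/spdk.key-null.0u1      # host secret
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oL1    # controller secret

$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

$rpc bdev_nvme_get_controllers                               # expect "nvme0" in the output
$rpc bdev_nvme_detach_controller nvme0

If authentication fails, the attach RPC errors out and nvme0 never appears in bdev_nvme_get_controllers, which is what the [[ nvme0 == \n\v\m\e\0 ]] check in the trace is guarding.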
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.582 10:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.841 nvme0n1 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.841 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.100 nvme0n1 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.100 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.359 nvme0n1 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 
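The nvmet_auth_set_key traces repeated through this section (host/auth.sh@42-51) all follow one shape: pick digest, DH group and key for the current keyid, then echo 'hmac(sha256)', the DH group, the DHHC-1 key and, when present, the controller key. A rough sketch of that shape follows; the configfs paths and the keys/ckeys arrays are illustrative assumptions (the standard Linux nvmet host attributes), not copied from auth.sh:

# Target-side sketch: push the host's DH-HMAC-CHAP settings into nvmet.
# Paths and arrays below are assumptions for illustration only.
nvmet_auth_set_key_sketch() {
  local digest=$1 dhgroup=$2 keyid=$3
  local key=${keys[keyid]} ckey=${ckeys[keyid]}
  local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
  echo "hmac($digest)" > "$host/dhchap_hash"       # e.g. hmac(sha256)
  echo "$dhgroup"      > "$host/dhchap_dhgroup"    # e.g. ffdhe2048
  echo "$key"          > "$host/dhchap_key"        # DHHC-1:..: secret
  [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # bidirectional auth only
}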
00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.359 nvme0n1 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.359 10:44:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.359 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.618 10:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:39.876 
10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.876 nvme0n1 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.876 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.134 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.134 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.135 10:44:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.135 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.393 nvme0n1 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.393 10:44:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.393 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.394 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.394 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.394 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.394 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.394 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.394 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.394 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.394 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.394 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.394 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.394 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.394 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.394 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.659 nvme0n1 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.659 10:44:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.659 10:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.659 nvme0n1 00:25:40.659 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.659 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.659 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.659 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.659 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
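The connect_authenticate pass that starts here (host/auth.sh@55-65) is the host-side half of each iteration and reduces to a handful of RPCs. Spelled out as plain commands it looks roughly like the sketch below; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, and key4/ckey4 stand in for whichever keyid the loop is on:

# Host-side sketch of one authenticated connect/verify/teardown cycle.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key4                       # plus --dhchap-ctrlr-key ckey4 when a ctrlr key is configured
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # connect (and auth) succeeded
rpc_cmd bdev_nvme_detach_controller nvme0   # tear down before the next key/dhgroup combination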
00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.918 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.176 nvme0n1 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.176 10:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.743 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.001 nvme0n1 00:25:42.001 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.001 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.001 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.001 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.002 10:44:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.002 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.261 nvme0n1 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
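By this point the trace has finished the ffdhe3072 passes and is repeating the same sequence for ffdhe4096. The repetition is driven by two nested loops in host/auth.sh (the @101-@104 markers above); a minimal sketch of that driver, with illustrative array contents, is:

# Outer driver: every DH group is exercised with every configured key.
# The digest is fixed at sha256 in this portion of the trace.
for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ...
  for keyid in "${!keys[@]}"; do         # 0 1 2 3 4
    nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # target side (host/auth.sh@103)
    connect_authenticate sha256 "$dhgroup" "$keyid"  # host side   (host/auth.sh@104)
  done
done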
00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.261 10:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.828 nvme0n1 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
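The get_main_ns_ip helper traced just above (nvmf/common.sh@769-783) only picks which address to hand to bdev_nvme_attach_controller: it maps the transport to a variable name and dereferences it, ending in 'echo 10.0.0.1' on this run. A sketch of that selection logic, with the transport variable name assumed for illustration, is:

# Pick the initiator-facing IP for the current transport (tcp in this run).
get_main_ns_ip_sketch() {
  local ip
  local -A ip_candidates
  ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
  ip_candidates["tcp"]=NVMF_INITIATOR_IP
  local var=${ip_candidates[${TEST_TRANSPORT:-tcp}]}   # transport variable name is an assumption
  [[ -n $var ]] && ip=${!var}                          # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
  [[ -n $ip ]] || return 1
  echo "$ip"
}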
00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.828 10:44:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.828 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.087 nvme0n1 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.087 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.653 nvme0n1 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.653 10:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.556 10:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.815 nvme0n1 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.815 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.381 nvme0n1 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.381 10:44:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.381 10:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.946 nvme0n1 00:25:46.946 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.946 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.946 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.946 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.946 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.946 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:47.205 
10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.205 10:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.772 nvme0n1 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.772 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.338 nvme0n1 00:25:48.338 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.338 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.338 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.338 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.338 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.338 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.338 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.338 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.338 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.338 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:48.339 10:44:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.339 10:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.274 nvme0n1 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.274 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.275 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.275 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.275 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.275 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.275 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.275 10:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.208 nvme0n1 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.208 10:44:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.208 10:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.141 nvme0n1 00:25:51.141 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.141 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.141 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.141 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.141 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.141 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.400 10:44:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.400 10:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.335 nvme0n1 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.335 10:44:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.335 10:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.271 nvme0n1 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:25:53.271 
10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.271 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.530 nvme0n1 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.530 10:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.790 nvme0n1 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.790 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.049 nvme0n1 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.049 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:54.050 10:44:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.050 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.309 nvme0n1 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
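Each block of trace above and below is one pass of the same authentication round, repeated for every (digest, dhgroup, keyid) combination: host/auth.sh programs the target-side DH-HMAC-CHAP secret (nvmet_auth_set_key), restricts the SPDK host to the matching digest and FFDHE group (bdev_nvme_set_options), attaches the controller over TCP with the per-key secrets, checks that a controller named nvme0 shows up, and detaches it again. A condensed sketch of that loop follows, using only the helpers and RPCs visible in the trace; rpc_cmd, nvmet_auth_set_key and the digests/dhgroups/keys/ckeys arrays are defined elsewhere in the SPDK test scripts and are assumed here, not reimplemented.

  # One pass per (digest, dhgroup, keyid), as traced at host/auth.sh@100-104.
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # Target side: install key/ckey for this digest/dhgroup/keyid.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        # Host side: allow only the digest/dhgroup under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Optional controller key, built the same way host/auth.sh@58 does it.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # 10.0.0.1 is the initiator IP that get_main_ns_ip echoes in this run.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
        # The connect only counts if the expected controller is reported back.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
  done
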
00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.309 nvme0n1 00:25:54.309 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.568 10:44:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.568 10:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.568 nvme0n1 00:25:54.568 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.568 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.568 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.568 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.568 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.568 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.826 10:44:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.826 10:44:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.826 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.827 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.827 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.827 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.827 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.827 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.827 nvme0n1 00:25:54.827 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.827 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.827 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.827 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.827 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.827 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.085 nvme0n1 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.085 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.343 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.344 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.344 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.344 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.344 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:55.344 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.344 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.344 nvme0n1 00:25:55.344 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.344 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.344 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.344 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.344 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.344 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.602 
10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.602 10:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.602 nvme0n1 00:25:55.602 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.602 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.602 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.602 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.602 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.602 
10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.861 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.119 nvme0n1 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.119 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.120 10:44:44 
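The trace above repeats one fixed pattern: the loop markers at host/auth.sh@101-103 walk every DH group and every key index, re-key the target with nvmet_auth_set_key, and then run connect_authenticate with the same parameters. A minimal reconstruction of that driver loop, as far as this excerpt shows it (the keys/ckeys arrays and the dhgroups list are populated earlier in auth.sh and are not visible here, so their contents below are placeholders):

    # Hypothetical reconstruction of the host/auth.sh driver loop seen in this trace.
    # keys/ckeys are assumed to already hold the DHHC-1 secrets; sha384 is the digest
    # exercised in this portion of the log.
    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)   # the groups that appear in this excerpt
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # host/auth.sh@103: re-key the target
            connect_authenticate sha384 "$dhgroup" "$keyid"  # host/auth.sh@104: attach, verify, detach
        done
    done
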
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.120 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.686 nvme0n1 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.686 10:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.944 nvme0n1 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.945 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.203 nvme0n1 00:25:57.203 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.203 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.203 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.203 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.203 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.203 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.462 10:44:45 
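connect_authenticate (host/auth.sh@55-61) is the per-iteration check on the SPDK host side: it narrows the initiator to a single digest and DH group with bdev_nvme_set_options, resolves the initiator address through get_main_ns_ip, and attaches with the matching DH-HMAC-CHAP key pair. Key index 4 has no controller key, which is why the ckey expansion at host/auth.sh@58 is empty there and the --dhchap-ctrlr-key argument simply drops out of the attach call. A sketch of that RPC sequence under the same assumptions (rpc_cmd and the registered key names key0..key4 / ckey0..ckey3 come from the surrounding test scripts):

    # Sketch of the host-side RPC sequence; digest/dhgroup/keyid mirror one iteration above.
    digest=sha384 dhgroup=ffdhe4096 keyid=0
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty when no controller key is set
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
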
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.462 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.463 10:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.721 nvme0n1 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
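nvmet_auth_set_key (host/auth.sh@42-51) is the target-side half: it echoes the HMAC name ('hmac(sha384)'), the DH group, and the DHHC-1 secrets, skipping the controller key when none is defined for that index. The xtrace does not show where those echoes are redirected; on a Linux nvmet target they would normally land in the per-host configfs attributes, so treat the paths below as an assumption rather than something this log proves:

    # Assumed destinations of the echo calls at host/auth.sh@48-51 (redirections are not
    # visible in the xtrace output above).
    host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host_cfg/dhchap_hash"
    echo ffdhe6144      > "$host_cfg/dhchap_dhgroup"
    echo "$key"         > "$host_cfg/dhchap_key"
    [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"
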
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.721 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.288 nvme0n1 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.288 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.546 10:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.112 nvme0n1 00:25:59.112 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.112 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.112 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.112 10:44:47 
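Each iteration ends with the same verification and teardown: the bare nvme0n1 lines interleaved above appear to be the stdout of the attach RPC reporting the namespace bdev it created, and host/auth.sh@64-65 then checks that bdev_nvme_get_controllers reports a controller named nvme0 before detaching it so the next key can be exercised. In script form:

    # Verification and teardown performed after every authenticated attach.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
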
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.112 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.112 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.112 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.112 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.112 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.113 10:44:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.113 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.679 nvme0n1 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:59.679 10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.679 
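All of the secrets in this run use the DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64>:, where the two-digit field indicates how the secret was transformed (00 for a cleartext secret, 01/02/03 for SHA-256/384/512-derived ones) and the base64 payload carries the secret followed by a 4-byte checksum; that is why key0 above begins with DHHC-1:00: while its paired controller key begins with DHHC-1:03:. A small format check, using only a string already shown in the trace (assumes coreutils base64):

    # Minimal sanity check of the DHHC-1 secret format used throughout this test.
    check_dhchap_secret() {
        local s=$1
        [[ $s =~ ^DHHC-1:(0[0-3]):([A-Za-z0-9+/=]+):$ ]] || return 1
        local transform=${BASH_REMATCH[1]}   # 00 = cleartext, 01/02/03 = SHA-256/384/512
        local bytes
        bytes=$(printf '%s' "${BASH_REMATCH[2]}" | base64 -d | wc -c)
        echo "transform=$transform decoded_bytes=$bytes"   # secret plus trailing checksum
    }
    check_dhchap_secret 'DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh:'
    # -> transform=00 decoded_bytes=36 (a 32-byte secret plus the 4-byte checksum)
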
10:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.247 nvme0n1 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.247 10:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.813 nvme0n1 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.813 10:44:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.813 10:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.747 nvme0n1 00:26:01.747 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.747 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.747 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.747 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.747 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.747 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.747 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.747 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.747 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.747 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.006 10:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.941 nvme0n1 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
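[Editor's note] The run above is one pass of the host-side half for a single key slot: restrict the host's DH-HMAC-CHAP digests/DH groups, attach a controller with that slot's key pair, confirm the controller actually appeared, then detach it. A condensed standalone sketch of the same sequence (assumptions: SPDK's scripts/rpc.py stands in for the suite's rpc_cmd wrapper and is on PATH, the target listens on 10.0.0.1:4420, and key1/ckey1 are key names already registered with the SPDK keyring earlier in the script):

#!/usr/bin/env bash
set -e
digest=sha384
dhgroup=ffdhe8192
keyid=1

# Limit the initiator to the digest/DH group under test.
rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the host key (and bidirectional controller key) registered for this slot.
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The controller only shows up if DH-HMAC-CHAP succeeded; set -e aborts the script otherwise.
[[ "$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

rpc.py bdev_nvme_detach_controller nvme0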
xtrace_disable 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.941 
10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.941 10:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.875 nvme0n1 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.875 10:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.809 nvme0n1 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.809 10:44:53 
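[Editor's note] The nvmet_auth_set_key calls interleaved above are the target-side half: before each connection attempt, the digest, DH group, and DHHC-1 secrets for the slot are handed to the kernel nvmet target. The trace records only the echo commands, not where their output goes; a plausible reconstruction, assuming the kernel's in-band-auth configfs attributes under the allowed host's NQN (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the destinations:

# Sketch only: the secret values are the slot-1 strings from the trace,
# the configfs path is an assumption.
keys[1]='DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==:'
ckeys[1]='DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==:'

nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

	echo "hmac(${digest})" > "${host_dir}/dhchap_hash"     # e.g. 'hmac(sha384)'
	echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"  # e.g. ffdhe8192
	echo "${keys[keyid]}"  > "${host_dir}/dhchap_key"      # host secret for this slot
	if [[ -n ${ckeys[keyid]} ]]; then
		# A bidirectional secret is only written when the slot has one (slot 4 does not).
		echo "${ckeys[keyid]}" > "${host_dir}/dhchap_ctrl_key"
	fi
}

# nvmet_auth_set_key sha384 ffdhe8192 1    # mirrors the call traced above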
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.809 10:44:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.809 10:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.744 nvme0n1 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.744 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
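[Editor's note] The host/auth.sh@100-@102 markers that appear just above are the driver loops: every digest x DH-group x key-slot combination is pushed to the target and then exercised from the host. In outline (only the digests and groups visible in this excerpt are listed; the keys/ckeys arrays and the two helpers are assumed to be defined earlier in the script):

digests=(sha384 sha512)                   # values seen in this excerpt; the full run covers more
dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192)  # likewise

for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side key/slot setup
			connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side attach/verify/detach
		done
	done
done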
common/autotest_common.sh@10 -- # set +x 00:26:06.003 nvme0n1 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.003 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.262 nvme0n1 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:06.262 
10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
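[Editor's note] The recurring ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line in the trace uses bash's :+ expansion to build an optional argument list: slots that have a controller secret add the extra flag, slots that do not (key slot 4 above) add nothing, which is why the key4 attach commands carry no --dhchap-ctrlr-key. A self-contained illustration with placeholder values:

ckeys=([1]='DHHC-1:02:placeholder==:' [4]='')   # placeholder strings, not real secrets

for keyid in 1 4; do
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
	echo "slot ${keyid}: ${ckey[*]:-no controller-key argument}"
done
# slot 1: --dhchap-ctrlr-key ckey1
# slot 4: no controller-key argument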
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.262 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.263 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.263 nvme0n1 00:26:06.263 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.263 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.263 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.263 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.263 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.263 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.520 
10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.520 nvme0n1 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.520 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
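[Editor's note] The get_main_ns_ip blocks that repeat above (nvmf/common.sh@769-@783) just decide which environment variable names the address to dial: NVMF_INITIATOR_IP for tcp, NVMF_FIRST_TARGET_IP for rdma, and then echo its value (10.0.0.1 here). A simplified reconstruction of that selection; the variable values come from the trace, the function body is inferred:

get_main_ns_ip() {
	local ip var
	declare -A ip_candidates=(
		[rdma]=NVMF_FIRST_TARGET_IP
		[tcp]=NVMF_INITIATOR_IP
	)
	var=${ip_candidates[${TEST_TRANSPORT:-tcp}]}
	[[ -n $var ]] && ip=${!var}      # indirect expansion: variable name -> value
	[[ -n $ip ]] && echo "$ip"
}

TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip                       # prints 10.0.0.1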
common/autotest_common.sh@10 -- # set +x 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.779 10:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.779 nvme0n1 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.779 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.038 nvme0n1 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.038 
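[Editor's note] Every secret in this trace uses the NVMe in-band-authentication representation DHHC-1:<t>:<base64>:, where <t> records the transformation hash applied to the secret (00 = none, 01/02/03 = SHA-256/384/512 per the spec) and the base64 payload is the secret followed by a 4-byte CRC-32. A small decoder over one key taken verbatim from the trace; the field interpretation follows the spec, the byte count is computed rather than assumed:

key='DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh:'
IFS=: read -r fmt transform blob _ <<< "$key"
payload_bytes=$(printf '%s' "$blob" | base64 -d | wc -c)
echo "format=${fmt} transform=${transform} payload=${payload_bytes} bytes (secret + 4-byte CRC-32)"
# format=DHHC-1 transform=00 payload=36 bytes (secret + 4-byte CRC-32)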
10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.038 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.295 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.295 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.295 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.296 10:44:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.296 nvme0n1 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.296 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:07.553 10:44:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.553 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.554 nvme0n1 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.554 10:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.554 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:26:07.811 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.812 10:44:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.812 nvme0n1 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.812 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:08.069 
10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
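[Note on the trace above] Each pass of the sha512/ffdhe3072 loop reduces to four RPC calls against the host side. The following is a minimal sketch of one iteration, assuming scripts/rpc.py (here behind a hypothetical SPDK_RPC variable) stands in for the test's rpc_cmd wrapper, that the target subsystem nqn.2024-02.io.spdk:cnode0 is already listening on 10.0.0.1:4420, and that the key names key1/ckey1 were registered earlier in the run (not shown in this excerpt):

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration from the trace above.
# SPDK_RPC, and the assumption that key1/ckey1 already exist, are illustrative only.
SPDK_RPC=${SPDK_RPC:-scripts/rpc.py}

# 1) Restrict the host to the digest/dhgroup pair under test (sha512 + ffdhe3072 here).
$SPDK_RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# 2) Attach to the target with the DH-HMAC-CHAP key for this keyid.
$SPDK_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3) Verify the controller came up under the expected name ...
[[ "$($SPDK_RPC bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# 4) ... and tear it down before the next keyid/dhgroup combination.
$SPDK_RPC bdev_nvme_detach_controller nvme0

The --dhchap-ctrlr-key argument is only passed when a controller key exists for the keyid (the trace's ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion in auth.sh), which is why the keyid=4 attach above and below runs with --dhchap-key key4 alone.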
00:26:08.069 nvme0n1 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.069 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.327 10:44:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.327 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.586 nvme0n1 00:26:08.586 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.586 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.586 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.586 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.586 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.586 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.586 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.586 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.586 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.586 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.587 10:44:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.587 10:44:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.587 10:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.845 nvme0n1 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.845 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.412 nvme0n1 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.412 10:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.671 nvme0n1 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.671 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.237 nvme0n1 00:26:10.237 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.237 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.237 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.237 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.237 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.237 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.237 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.237 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.238 10:44:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.238 10:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.805 nvme0n1 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.805 10:44:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.805 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.372 nvme0n1 00:26:11.372 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.373 10:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.939 nvme0n1 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.939 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.505 nvme0n1 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.505 10:45:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.505 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.506 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.506 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.506 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.506 10:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.071 nvme0n1 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1NTE2N2FmNzUyYzdkMTI4YTliZDYzNDI1NjExYzYg2AGh: 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: ]] 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkODVjN2M3MDRjMzg5MjgwZTNmYmQyYWE5MmEzM2U1MThlMDhiZjYzYzBkOTZkZjExOGU0OThhNWFlM2JjMkq5pME=: 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.071 10:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.003 nvme0n1 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.003 10:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.018 nvme0n1 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.018 10:45:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.018 10:45:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.018 10:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.976 nvme0n1 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0MjY3Mzc1OGNlMGQ0YTUzYWE0YjhiMDEwOTEwMDY2YWUxMjhkMGExNzAxYTU49V48dQ==: 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: ]] 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlhYjMxYzYwZjM3Mzc0NWViN2Y4NDYxYzU5ZWY4Y2YCw+Ba: 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.976 10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.976 
10:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.909 nvme0n1 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRmOGVmOWI3OGVlMzVmZDkxM2U1NGMzYzg0YjBjMWRmOTc4NTA5ZGViMTIxODAxYzdiNTVhOTdmMDA1Yzk2Oe8H5EY=: 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.909 10:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.843 nvme0n1 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.843 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.107 request: 00:26:18.107 { 00:26:18.107 "name": "nvme0", 00:26:18.107 "trtype": "tcp", 00:26:18.107 "traddr": "10.0.0.1", 00:26:18.107 "adrfam": "ipv4", 00:26:18.107 "trsvcid": "4420", 00:26:18.107 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:18.107 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:18.107 "prchk_reftag": false, 00:26:18.107 "prchk_guard": false, 00:26:18.107 "hdgst": false, 00:26:18.107 "ddgst": false, 00:26:18.107 "allow_unrecognized_csi": false, 00:26:18.107 "method": "bdev_nvme_attach_controller", 00:26:18.107 "req_id": 1 00:26:18.107 } 00:26:18.107 Got JSON-RPC error response 00:26:18.107 response: 00:26:18.107 { 00:26:18.107 "code": -5, 00:26:18.107 "message": "Input/output error" 00:26:18.107 } 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
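In shorthand, the negative-path check that just produced the "code": -5 ("Input/output error") response is: with the target configured to require DH-HMAC-CHAP, an attach attempt that supplies no --dhchap-key must be rejected, and no controller may be left behind. A condensed sketch only, reusing the rpc_cmd and NOT helpers and the exact endpoint arguments already visible in this trace (no new commands or flags are assumed):

    # target demands DH-HMAC-CHAP, so an attach without a key is expected to fail (-5)
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    # and the failed attempt must not have created a controller
    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))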
00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.107 request: 00:26:18.107 { 00:26:18.107 "name": "nvme0", 00:26:18.107 "trtype": "tcp", 00:26:18.107 "traddr": "10.0.0.1", 00:26:18.107 "adrfam": "ipv4", 00:26:18.107 "trsvcid": "4420", 00:26:18.107 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:18.107 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:18.107 "prchk_reftag": false, 00:26:18.107 "prchk_guard": false, 00:26:18.107 "hdgst": false, 00:26:18.107 "ddgst": false, 00:26:18.107 "dhchap_key": "key2", 00:26:18.107 "allow_unrecognized_csi": false, 00:26:18.107 "method": "bdev_nvme_attach_controller", 00:26:18.107 "req_id": 1 00:26:18.107 } 00:26:18.107 Got JSON-RPC error response 00:26:18.107 response: 00:26:18.107 { 00:26:18.107 "code": -5, 00:26:18.107 "message": "Input/output error" 00:26:18.107 } 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
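The key-mismatch cases that follow reduce to the pattern below: an attach or a live re-key whose key pair does not line up with what the target was last given via nvmet_auth_set_key is rejected (-5 on bdev_nvme_attach_controller, -13 "Permission denied" on bdev_nvme_set_keys), while a matching pair is accepted. This is shorthand for the trace that follows, with the repeated connection arguments gathered into one array; every command, flag and key name is taken from the log itself:

    conn=(-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0)
    NOT rpc_cmd bdev_nvme_attach_controller "${conn[@]}" --dhchap-key key2                           # wrong host key -> -5
    NOT rpc_cmd bdev_nvme_attach_controller "${conn[@]}" --dhchap-key key1 --dhchap-ctrlr-key ckey2  # mismatched ctrlr key -> -5
    rpc_cmd bdev_nvme_attach_controller "${conn[@]}" --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
        --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1                                           # matching pair attaches
    nvmet_auth_set_key sha256 ffdhe2048 2                                                            # target moves to key id 2
    rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2                      # re-key in step with the target: accepted
    NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2                  # pair the target will not accept -> -13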
00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.107 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.366 request: 00:26:18.366 { 00:26:18.366 "name": "nvme0", 00:26:18.366 "trtype": "tcp", 00:26:18.366 "traddr": "10.0.0.1", 00:26:18.366 "adrfam": "ipv4", 00:26:18.366 "trsvcid": "4420", 00:26:18.366 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:18.366 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:18.366 "prchk_reftag": false, 00:26:18.366 "prchk_guard": false, 00:26:18.366 "hdgst": false, 00:26:18.366 "ddgst": false, 00:26:18.366 "dhchap_key": "key1", 00:26:18.366 "dhchap_ctrlr_key": "ckey2", 00:26:18.366 "allow_unrecognized_csi": false, 00:26:18.366 "method": "bdev_nvme_attach_controller", 00:26:18.366 "req_id": 1 00:26:18.366 } 00:26:18.366 Got JSON-RPC error response 00:26:18.366 response: 00:26:18.366 { 00:26:18.366 "code": -5, 00:26:18.366 "message": "Input/output 
error" 00:26:18.366 } 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.366 nvme0n1 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.366 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:18.623 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.624 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.624 request: 00:26:18.624 { 00:26:18.624 "name": "nvme0", 00:26:18.624 "dhchap_key": "key1", 00:26:18.624 "dhchap_ctrlr_key": "ckey2", 00:26:18.624 "method": "bdev_nvme_set_keys", 00:26:18.624 "req_id": 1 00:26:18.624 } 00:26:18.624 Got JSON-RPC error response 00:26:18.624 response: 00:26:18.624 { 00:26:18.624 "code": -13, 00:26:18.624 "message": "Permission denied" 00:26:18.624 } 00:26:18.624 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:18.624 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:18.624 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:18.624 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:18.624 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:26:18.624 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.624 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.624 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.624 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:18.624 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.624 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:18.624 10:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:19.556 10:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.556 10:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:19.556 10:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.556 10:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.556 10:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.813 10:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:19.813 10:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTk2YjFjNzM3ZTkxMjg5YjRhZDA2OTcyOWM5MDQ1MmZmYWUzMDVjMmE2Y2ZjZGZi+syEcg==: 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: ]] 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NDk0YTIxM2FlYmEyNWZjZDE5MzRhYmM2NGRiZjA1ZWEzNmEwMTZhM2Y4ZWYyMDA2hT2seA==: 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.746 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.004 nvme0n1 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY3ZDVlNDcxOGZiNjg0Mzg3OWJlNjg3N2RhNWE5OTVNukEg: 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: ]] 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NTY0OGI0MzY1MmNjMWFlZWY5YzU3ZGI5YWZkYTT4zVDv: 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.004 request: 00:26:21.004 { 00:26:21.004 "name": "nvme0", 00:26:21.004 "dhchap_key": "key2", 00:26:21.004 "dhchap_ctrlr_key": "ckey1", 00:26:21.004 "method": "bdev_nvme_set_keys", 00:26:21.004 "req_id": 1 00:26:21.004 } 00:26:21.004 Got JSON-RPC error response 00:26:21.004 response: 00:26:21.004 { 00:26:21.004 "code": -13, 00:26:21.004 "message": "Permission denied" 00:26:21.004 } 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:21.004 10:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:21.937 10:45:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:21.937 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:21.937 rmmod nvme_tcp 00:26:21.937 rmmod nvme_fabrics 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 470355 ']' 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 470355 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 470355 ']' 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 470355 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 470355 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 470355' 00:26:22.195 killing process with pid 470355 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 470355 00:26:22.195 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 470355 00:26:22.453 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:22.453 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:22.453 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:22.453 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:22.453 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:22.453 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:22.453 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:22.453 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:22.453 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:22.453 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.453 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:26:22.453 10:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.357 10:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:24.357 10:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:24.357 10:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:24.357 10:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:24.357 10:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:24.357 10:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:24.357 10:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:24.357 10:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:24.357 10:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:24.357 10:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:24.357 10:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:24.357 10:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:24.357 10:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:25.732 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:25.732 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:25.732 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:25.732 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:25.732 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:25.732 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:25.732 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:25.732 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:25.732 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:25.732 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:25.732 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:25.732 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:25.732 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:25.732 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:25.732 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:25.732 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:27.633 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:26:27.633 10:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.0u1 /tmp/spdk.key-null.1lO /tmp/spdk.key-sha256.jXZ /tmp/spdk.key-sha384.zcM /tmp/spdk.key-sha512.GBU /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:27.633 10:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:29.009 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:29.009 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:29.009 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:26:29.009 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:29.009 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:29.009 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:29.009 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:29.009 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:29.009 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:29.009 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:29.009 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:29.009 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:29.009 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:29.009 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:29.009 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:29.009 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:29.009 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:29.009 00:26:29.009 real 0m57.378s 00:26:29.009 user 0m54.194s 00:26:29.009 sys 0m6.252s 00:26:29.009 10:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:29.009 10:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.009 ************************************ 00:26:29.009 END TEST nvmf_auth_host 00:26:29.009 ************************************ 00:26:29.009 10:45:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:29.009 10:45:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:29.009 10:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:29.009 10:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:29.009 10:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.009 ************************************ 00:26:29.009 START TEST nvmf_digest 00:26:29.009 ************************************ 00:26:29.009 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:29.009 * Looking for test storage... 
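
The nvmf_auth_host run that finishes above exercises in-band DH-HMAC-CHAP re-keying through the bdev_nvme_set_keys RPC. A minimal sketch of that check follows; it is not part of the captured output and assumes the controller nvme0 is attached over TCP and that the DHHC-1 keys named key1, key2 and ckey2 were already registered earlier in the test:

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Rotating to a key pair the target authenticates is accepted:
  $SPDK_ROOT/scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Offering a host key the target rejects fails with JSON-RPC error -13 ("Permission denied"),
  # which host/auth.sh asserts through its NOT wrapper:
  $SPDK_ROOT/scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 \
    || echo 'rejected as expected'
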
00:26:29.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:29.009 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:29.009 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:26:29.009 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:29.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.268 --rc genhtml_branch_coverage=1 00:26:29.268 --rc genhtml_function_coverage=1 00:26:29.268 --rc genhtml_legend=1 00:26:29.268 --rc geninfo_all_blocks=1 00:26:29.268 --rc geninfo_unexecuted_blocks=1 00:26:29.268 00:26:29.268 ' 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:29.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.268 --rc genhtml_branch_coverage=1 00:26:29.268 --rc genhtml_function_coverage=1 00:26:29.268 --rc genhtml_legend=1 00:26:29.268 --rc geninfo_all_blocks=1 00:26:29.268 --rc geninfo_unexecuted_blocks=1 00:26:29.268 00:26:29.268 ' 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:29.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.268 --rc genhtml_branch_coverage=1 00:26:29.268 --rc genhtml_function_coverage=1 00:26:29.268 --rc genhtml_legend=1 00:26:29.268 --rc geninfo_all_blocks=1 00:26:29.268 --rc geninfo_unexecuted_blocks=1 00:26:29.268 00:26:29.268 ' 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:29.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.268 --rc genhtml_branch_coverage=1 00:26:29.268 --rc genhtml_function_coverage=1 00:26:29.268 --rc genhtml_legend=1 00:26:29.268 --rc geninfo_all_blocks=1 00:26:29.268 --rc geninfo_unexecuted_blocks=1 00:26:29.268 00:26:29.268 ' 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.268 
10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.268 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:29.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:29.269 10:45:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:29.269 10:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:31.799 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:31.799 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:31.799 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:31.799 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:31.799 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:31.799 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:31.799 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:31.800 
10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:26:31.800 Found 0000:82:00.0 (0x8086 - 0x159b) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:26:31.800 Found 0000:82:00.1 (0x8086 - 0x159b) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:26:31.800 Found net devices under 0000:82:00.0: cvl_0_0 
00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:26:31.800 Found net devices under 0000:82:00.1: cvl_0_1 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:31.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:26:31.800 00:26:31.800 --- 10.0.0.2 ping statistics --- 00:26:31.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.800 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:31.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:31.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:26:31.800 00:26:31.800 --- 10.0.0.1 ping statistics --- 00:26:31.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.800 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:31.800 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:31.800 ************************************ 00:26:31.800 START TEST nvmf_digest_clean 00:26:31.800 ************************************ 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=481271 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 481271 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 481271 ']' 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:31.801 10:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:31.801 [2024-11-15 10:45:19.896011] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:26:31.801 [2024-11-15 10:45:19.896092] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.801 [2024-11-15 10:45:19.966722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.801 [2024-11-15 10:45:20.028061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.801 [2024-11-15 10:45:20.028123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:31.801 [2024-11-15 10:45:20.028152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.801 [2024-11-15 10:45:20.028175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.801 [2024-11-15 10:45:20.028186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
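
For reference, the loopback topology prepared by nvmf_tcp_init above condenses to the commands below. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones reported in the log; this is only a sketch of what nvmf/common.sh does (the iptables comment string is simplified), not a literal excerpt:

  # The target port cvl_0_0 moves into its own namespace; the initiator port cvl_0_1 stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator side and verify reachability:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2
  # The nvmf target then runs inside the namespace, waiting for RPC configuration:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
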
00:26:31.801 [2024-11-15 10:45:20.028863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.801 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:31.801 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:31.801 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:31.801 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:31.801 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:31.801 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:31.801 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:31.801 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:31.801 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:31.801 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.801 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:31.801 null0 00:26:31.801 [2024-11-15 10:45:20.261695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.058 [2024-11-15 10:45:20.285937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=481299 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 481299 /var/tmp/bperf.sock 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 481299 ']' 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 
4096 -t 2 -q 128 -z --wait-for-rpc 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:32.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:32.059 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:32.059 [2024-11-15 10:45:20.339740] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:26:32.059 [2024-11-15 10:45:20.339810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481299 ] 00:26:32.059 [2024-11-15 10:45:20.411970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.059 [2024-11-15 10:45:20.475488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.316 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:32.316 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:32.316 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:32.316 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:32.316 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:32.574 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.574 10:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:33.188 nvme0n1 00:26:33.188 10:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:33.188 10:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:33.188 Running I/O for 2 seconds... 
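
The randread pass whose results follow is driven entirely over the bperf RPC socket; condensed from the commands above (socket path, NQN and address are those used by host/digest.sh, and this is a sketch rather than captured output):

  BPERF_SOCK=/var/tmp/bperf.sock
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # --wait-for-rpc defers framework initialization so accel options could be set first;
  # with DSA disabled the test simply calls framework_start_init right away:
  $SPDK_ROOT/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  $SPDK_ROOT/scripts/rpc.py -s $BPERF_SOCK framework_start_init
  # Attach the target subsystem with TCP data digest (--ddgst) enabled, then run the workload:
  $SPDK_ROOT/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests
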
00:26:35.123 19632.00 IOPS, 76.69 MiB/s [2024-11-15T09:45:23.844Z] 19932.00 IOPS, 77.86 MiB/s 00:26:35.381 Latency(us) 00:26:35.381 [2024-11-15T09:45:23.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.381 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:35.381 nvme0n1 : 2.01 19944.48 77.91 0.00 0.00 6410.48 3082.62 13981.01 00:26:35.381 [2024-11-15T09:45:23.844Z] =================================================================================================================== 00:26:35.381 [2024-11-15T09:45:23.844Z] Total : 19944.48 77.91 0.00 0.00 6410.48 3082.62 13981.01 00:26:35.381 { 00:26:35.381 "results": [ 00:26:35.381 { 00:26:35.381 "job": "nvme0n1", 00:26:35.381 "core_mask": "0x2", 00:26:35.381 "workload": "randread", 00:26:35.381 "status": "finished", 00:26:35.381 "queue_depth": 128, 00:26:35.381 "io_size": 4096, 00:26:35.381 "runtime": 2.005166, 00:26:35.381 "iops": 19944.4833993794, 00:26:35.381 "mibps": 77.90813827882579, 00:26:35.381 "io_failed": 0, 00:26:35.381 "io_timeout": 0, 00:26:35.381 "avg_latency_us": 6410.478070725256, 00:26:35.381 "min_latency_us": 3082.6192592592593, 00:26:35.381 "max_latency_us": 13981.013333333334 00:26:35.381 } 00:26:35.381 ], 00:26:35.381 "core_count": 1 00:26:35.381 } 00:26:35.381 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:35.381 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:35.381 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:35.381 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:35.381 | select(.opcode=="crc32c") 00:26:35.381 | "\(.module_name) \(.executed)"' 00:26:35.381 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:35.638 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:35.638 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:35.638 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:35.639 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:35.639 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 481299 00:26:35.639 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 481299 ']' 00:26:35.639 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 481299 00:26:35.639 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:35.639 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:35.639 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 481299 00:26:35.639 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:35.639 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:26:35.639 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 481299' 00:26:35.639 killing process with pid 481299 00:26:35.639 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 481299 00:26:35.639 Received shutdown signal, test time was about 2.000000 seconds 00:26:35.639 00:26:35.639 Latency(us) 00:26:35.639 [2024-11-15T09:45:24.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.639 [2024-11-15T09:45:24.102Z] =================================================================================================================== 00:26:35.639 [2024-11-15T09:45:24.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:35.639 10:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 481299 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=481825 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 481825 /var/tmp/bperf.sock 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 481825 ']' 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:35.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:35.897 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:35.897 [2024-11-15 10:45:24.205837] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:26:35.897 [2024-11-15 10:45:24.205916] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481825 ] 00:26:35.897 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:35.897 Zero copy mechanism will not be used. 00:26:35.897 [2024-11-15 10:45:24.271792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.897 [2024-11-15 10:45:24.329929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.156 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:36.156 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:36.156 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:36.156 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:36.156 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:36.414 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.414 10:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.980 nvme0n1 00:26:36.980 10:45:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:36.980 10:45:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:36.980 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:36.980 Zero copy mechanism will not be used. 00:26:36.980 Running I/O for 2 seconds... 
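The pass/fail check traced above (get_accel_stats, read -r acc_module acc_executed, exp_module=software) boils down to asking bdevperf which accel module computed the crc32c digests and how many times it ran. A standalone equivalent of that check, reusing the jq filter string that appears in the trace, might look like the following; the success message is illustrative only:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
    | { read -r acc_module acc_executed
        # with DSA scanning off, the digests are expected to come from the software module
        [ "$acc_module" = software ] && [ "$acc_executed" -gt 0 ] \
          && echo "crc32c computed in software ($acc_executed operations)"; }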
00:26:39.285 5212.00 IOPS, 651.50 MiB/s [2024-11-15T09:45:27.748Z] 5385.00 IOPS, 673.12 MiB/s 00:26:39.285 Latency(us) 00:26:39.285 [2024-11-15T09:45:27.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.285 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:39.285 nvme0n1 : 2.00 5383.86 672.98 0.00 0.00 2968.45 713.01 10485.76 00:26:39.285 [2024-11-15T09:45:27.748Z] =================================================================================================================== 00:26:39.285 [2024-11-15T09:45:27.748Z] Total : 5383.86 672.98 0.00 0.00 2968.45 713.01 10485.76 00:26:39.285 { 00:26:39.285 "results": [ 00:26:39.285 { 00:26:39.285 "job": "nvme0n1", 00:26:39.285 "core_mask": "0x2", 00:26:39.285 "workload": "randread", 00:26:39.285 "status": "finished", 00:26:39.286 "queue_depth": 16, 00:26:39.286 "io_size": 131072, 00:26:39.286 "runtime": 2.003397, 00:26:39.286 "iops": 5383.8555213969075, 00:26:39.286 "mibps": 672.9819401746134, 00:26:39.286 "io_failed": 0, 00:26:39.286 "io_timeout": 0, 00:26:39.286 "avg_latency_us": 2968.4530097314077, 00:26:39.286 "min_latency_us": 713.0074074074074, 00:26:39.286 "max_latency_us": 10485.76 00:26:39.286 } 00:26:39.286 ], 00:26:39.286 "core_count": 1 00:26:39.286 } 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:39.286 | select(.opcode=="crc32c") 00:26:39.286 | "\(.module_name) \(.executed)"' 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 481825 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 481825 ']' 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 481825 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 481825 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = 
sudo ']' 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 481825' 00:26:39.286 killing process with pid 481825 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 481825 00:26:39.286 Received shutdown signal, test time was about 2.000000 seconds 00:26:39.286 00:26:39.286 Latency(us) 00:26:39.286 [2024-11-15T09:45:27.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.286 [2024-11-15T09:45:27.749Z] =================================================================================================================== 00:26:39.286 [2024-11-15T09:45:27.749Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:39.286 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 481825 00:26:39.543 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=482232 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 482232 /var/tmp/bperf.sock 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 482232 ']' 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:39.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:39.544 10:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:39.544 [2024-11-15 10:45:27.924507] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:26:39.544 [2024-11-15 10:45:27.924586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482232 ] 00:26:39.544 [2024-11-15 10:45:27.991253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.801 [2024-11-15 10:45:28.050297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.801 10:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:39.801 10:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:39.801 10:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:39.801 10:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:39.801 10:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:40.365 10:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.365 10:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.621 nvme0n1 00:26:40.621 10:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:40.621 10:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:40.880 Running I/O for 2 seconds... 
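Each pass above ends with a JSON summary printed by bdevperf (the { "results": [ ... ] } blocks). When such a blob is captured to a file, the headline numbers can be pulled out with jq; the field names below are taken from the JSON in this log, while the file name is only a placeholder:

  # e.g. after redirecting one run's JSON output to randread_q16.json (placeholder name)
  jq -r '.results[] | "\(.job): \(.workload) qd=\(.queue_depth) \(.iops | floor) IOPS, avg \(.avg_latency_us | floor) us"' randread_q16.json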
00:26:42.747 20877.00 IOPS, 81.55 MiB/s [2024-11-15T09:45:31.210Z] 20862.50 IOPS, 81.49 MiB/s 00:26:42.747 Latency(us) 00:26:42.747 [2024-11-15T09:45:31.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.747 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:42.747 nvme0n1 : 2.01 20865.86 81.51 0.00 0.00 6122.17 4538.97 12621.75 00:26:42.747 [2024-11-15T09:45:31.210Z] =================================================================================================================== 00:26:42.747 [2024-11-15T09:45:31.210Z] Total : 20865.86 81.51 0.00 0.00 6122.17 4538.97 12621.75 00:26:42.747 { 00:26:42.747 "results": [ 00:26:42.747 { 00:26:42.747 "job": "nvme0n1", 00:26:42.747 "core_mask": "0x2", 00:26:42.747 "workload": "randwrite", 00:26:42.747 "status": "finished", 00:26:42.747 "queue_depth": 128, 00:26:42.747 "io_size": 4096, 00:26:42.747 "runtime": 2.007346, 00:26:42.747 "iops": 20865.8596973317, 00:26:42.747 "mibps": 81.50726444270195, 00:26:42.747 "io_failed": 0, 00:26:42.747 "io_timeout": 0, 00:26:42.747 "avg_latency_us": 6122.167053599141, 00:26:42.747 "min_latency_us": 4538.974814814815, 00:26:42.747 "max_latency_us": 12621.748148148148 00:26:42.747 } 00:26:42.747 ], 00:26:42.747 "core_count": 1 00:26:42.747 } 00:26:43.004 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:43.004 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:43.004 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:43.004 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:43.004 | select(.opcode=="crc32c") 00:26:43.004 | "\(.module_name) \(.executed)"' 00:26:43.004 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:43.262 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 482232 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 482232 ']' 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 482232 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 482232 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 482232' 00:26:43.263 killing process with pid 482232 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 482232 00:26:43.263 Received shutdown signal, test time was about 2.000000 seconds 00:26:43.263 00:26:43.263 Latency(us) 00:26:43.263 [2024-11-15T09:45:31.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.263 [2024-11-15T09:45:31.726Z] =================================================================================================================== 00:26:43.263 [2024-11-15T09:45:31.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:43.263 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 482232 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=482662 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 482662 /var/tmp/bperf.sock 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 482662 ']' 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:43.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:43.521 10:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:43.521 [2024-11-15 10:45:31.798488] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:26:43.521 [2024-11-15 10:45:31.798574] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482662 ] 00:26:43.521 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:43.521 Zero copy mechanism will not be used. 00:26:43.521 [2024-11-15 10:45:31.873249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.521 [2024-11-15 10:45:31.932227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.780 10:45:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:43.780 10:45:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:43.780 10:45:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:43.780 10:45:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:43.780 10:45:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:44.037 10:45:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.037 10:45:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.295 nvme0n1 00:26:44.295 10:45:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:44.295 10:45:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:44.553 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:44.553 Zero copy mechanism will not be used. 00:26:44.553 Running I/O for 2 seconds... 
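Taken together, the clean-digest phase walks a small workload matrix, visible in the traced run_bperf calls: randread and randwrite, each once at 4096 bytes / qd 128 and once at 131072 bytes / qd 16 (above the 65536-byte zero-copy threshold), always with DSA scanning off. Expressed as a loop over the same helper and arguments that appear in the trace, the coverage is roughly:

  for rw in randread randwrite; do
    run_bperf "$rw" 4096   128 false   # 4 KiB blocks, queue depth 128
    run_bperf "$rw" 131072 16  false   # 128 KiB blocks, queue depth 16, zero copy disabled by size
  done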
00:26:46.429 5183.00 IOPS, 647.88 MiB/s [2024-11-15T09:45:34.892Z] 5286.00 IOPS, 660.75 MiB/s 00:26:46.429 Latency(us) 00:26:46.429 [2024-11-15T09:45:34.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.429 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:46.429 nvme0n1 : 2.00 5285.00 660.63 0.00 0.00 3020.79 2342.31 7136.14 00:26:46.429 [2024-11-15T09:45:34.892Z] =================================================================================================================== 00:26:46.429 [2024-11-15T09:45:34.892Z] Total : 5285.00 660.63 0.00 0.00 3020.79 2342.31 7136.14 00:26:46.429 { 00:26:46.429 "results": [ 00:26:46.429 { 00:26:46.429 "job": "nvme0n1", 00:26:46.429 "core_mask": "0x2", 00:26:46.429 "workload": "randwrite", 00:26:46.429 "status": "finished", 00:26:46.429 "queue_depth": 16, 00:26:46.429 "io_size": 131072, 00:26:46.429 "runtime": 2.004162, 00:26:46.429 "iops": 5285.001911023161, 00:26:46.429 "mibps": 660.6252388778951, 00:26:46.429 "io_failed": 0, 00:26:46.429 "io_timeout": 0, 00:26:46.429 "avg_latency_us": 3020.7919794114355, 00:26:46.429 "min_latency_us": 2342.305185185185, 00:26:46.429 "max_latency_us": 7136.142222222222 00:26:46.429 } 00:26:46.429 ], 00:26:46.429 "core_count": 1 00:26:46.429 } 00:26:46.429 10:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:46.429 10:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:46.429 10:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:46.429 10:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:46.429 | select(.opcode=="crc32c") 00:26:46.429 | "\(.module_name) \(.executed)"' 00:26:46.429 10:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:46.690 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:46.690 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:46.690 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 482662 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 482662 ']' 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 482662 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 482662 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 482662' 00:26:46.947 killing process with pid 482662 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 482662 00:26:46.947 Received shutdown signal, test time was about 2.000000 seconds 00:26:46.947 00:26:46.947 Latency(us) 00:26:46.947 [2024-11-15T09:45:35.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.947 [2024-11-15T09:45:35.410Z] =================================================================================================================== 00:26:46.947 [2024-11-15T09:45:35.410Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 482662 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 481271 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 481271 ']' 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 481271 00:26:46.947 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:47.205 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:47.205 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 481271 00:26:47.205 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:47.205 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:47.205 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 481271' 00:26:47.205 killing process with pid 481271 00:26:47.205 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 481271 00:26:47.205 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 481271 00:26:47.463 00:26:47.463 real 0m15.838s 00:26:47.463 user 0m31.079s 00:26:47.463 sys 0m5.167s 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:47.463 ************************************ 00:26:47.463 END TEST nvmf_digest_clean 00:26:47.463 ************************************ 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:47.463 ************************************ 00:26:47.463 START TEST nvmf_digest_error 00:26:47.463 ************************************ 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=483195 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 483195 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 483195 ']' 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:47.463 10:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:47.463 [2024-11-15 10:45:35.785436] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:26:47.463 [2024-11-15 10:45:35.785522] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.463 [2024-11-15 10:45:35.857893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.463 [2024-11-15 10:45:35.914038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.463 [2024-11-15 10:45:35.914093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.463 [2024-11-15 10:45:35.914121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.463 [2024-11-15 10:45:35.914132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.463 [2024-11-15 10:45:35.914141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
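The nvmf target for this error phase is started with every tracepoint group enabled (-e 0xFFFF), and its startup notices immediately above describe how to collect that trace. Spelled out as commands (the copy destination is arbitrary):

  # snapshot the live trace of app instance 0 while the target is still running
  spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory trace file for offline analysis after the run
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0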
00:26:47.463 [2024-11-15 10:45:35.914776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:47.721 [2024-11-15 10:45:36.035475] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:47.721 null0 00:26:47.721 [2024-11-15 10:45:36.143440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.721 [2024-11-15 10:45:36.167684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=483223 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 483223 /var/tmp/bperf.sock 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 483223 ']' 
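What distinguishes this digest-error phase from the clean runs is mostly target-side configuration: crc32c is routed to the error-injection accel module, and injection is toggled around each sub-test. Condensed from the rpc_cmd and bperf_rpc calls traced here (accel_assign_opc above, the injection, attach and perform_tests calls that follow), with rpc_cmd talking to the target's default socket inside its network namespace and the -s option pointing at bdevperf:

  # target side: send crc32c through the error module, initially without corrupting anything
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # host side: enable NVMe error counters, set the bdev retry count to -1, attach with data digest on
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt the next 256 crc32c operations, then drive I/O; the reads below fail with data digest errors
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests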
00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:47.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:47.721 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:47.979 [2024-11-15 10:45:36.217879] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:26:47.979 [2024-11-15 10:45:36.217961] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483223 ] 00:26:47.979 [2024-11-15 10:45:36.285816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.979 [2024-11-15 10:45:36.349406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.313 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:48.313 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:48.313 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:48.313 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:48.313 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:48.313 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.313 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:48.313 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.313 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.313 10:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.879 nvme0n1 00:26:48.879 10:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:48.879 10:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.879 10:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:48.879 
10:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.879 10:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:48.879 10:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:48.879 Running I/O for 2 seconds... 00:26:48.879 [2024-11-15 10:45:37.192853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:48.879 [2024-11-15 10:45:37.192914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.879 [2024-11-15 10:45:37.192936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.879 [2024-11-15 10:45:37.206326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:48.879 [2024-11-15 10:45:37.206378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.879 [2024-11-15 10:45:37.206397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.879 [2024-11-15 10:45:37.218802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:48.879 [2024-11-15 10:45:37.218830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.879 [2024-11-15 10:45:37.218861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.879 [2024-11-15 10:45:37.229414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:48.879 [2024-11-15 10:45:37.229444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.879 [2024-11-15 10:45:37.229461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.879 [2024-11-15 10:45:37.241552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:48.880 [2024-11-15 10:45:37.241582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.880 [2024-11-15 10:45:37.241614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.880 [2024-11-15 10:45:37.254204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:48.880 [2024-11-15 10:45:37.254239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.880 [2024-11-15 10:45:37.254271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:48.880 [2024-11-15 10:45:37.269231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:48.880 [2024-11-15 10:45:37.269259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.880 [2024-11-15 10:45:37.269290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.880 [2024-11-15 10:45:37.278829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:48.880 [2024-11-15 10:45:37.278857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.880 [2024-11-15 10:45:37.278889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.880 [2024-11-15 10:45:37.293572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:48.880 [2024-11-15 10:45:37.293602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.880 [2024-11-15 10:45:37.293634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.880 [2024-11-15 10:45:37.305227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:48.880 [2024-11-15 10:45:37.305255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.880 [2024-11-15 10:45:37.305286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.880 [2024-11-15 10:45:37.317386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:48.880 [2024-11-15 10:45:37.317414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.880 [2024-11-15 10:45:37.317447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.880 [2024-11-15 10:45:37.327919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:48.880 [2024-11-15 10:45:37.327947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.880 [2024-11-15 10:45:37.327979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.880 [2024-11-15 10:45:37.338779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:48.880 [2024-11-15 10:45:37.338807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.880 [2024-11-15 10:45:37.338838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.138 [2024-11-15 10:45:37.355026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:49.138 [2024-11-15 10:45:37.355056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.138 [2024-11-15 10:45:37.355087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.138 [2024-11-15 10:45:37.367457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:49.138 [2024-11-15 10:45:37.367486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.138 [2024-11-15 10:45:37.367518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.138 [2024-11-15 10:45:37.378321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:49.138 [2024-11-15 10:45:37.378374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.138 [2024-11-15 10:45:37.378395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.138 [2024-11-15 10:45:37.390622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:49.138 [2024-11-15 10:45:37.390651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.138 [2024-11-15 10:45:37.390682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.138 [2024-11-15 10:45:37.403270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:49.138 [2024-11-15 10:45:37.403297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.138 [2024-11-15 10:45:37.403329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.138 [2024-11-15 10:45:37.412793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:49.138 [2024-11-15 10:45:37.412822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.138 [2024-11-15 10:45:37.412852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.138 [2024-11-15 10:45:37.424716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510) 00:26:49.138 [2024-11-15 10:45:37.424745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.138 [2024-11-15 10:45:37.424760] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[repeated entries, 10:45:37.437 through 10:45:39.185: the same three-line pattern recurs for a long stream of READ commands on qid:1 of tqpair=(0x21d3510), differing only in timestamp, cid and lba --
    nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510)
    nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:<cid> nsid:1 lba:<lba> len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
    nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:<cid> cdw0:0 sqhd:0001 p:0 m:0 dnr:0
periodic throughput samples are interleaved in the same stream: 20259.00 IOPS, 79.14 MiB/s [2024-11-15T09:45:38.380Z] and 19859.50 IOPS, 77.58 MiB/s [2024-11-15T09:45:39.416Z]]
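What the entries above record is the host-side NVMe/TCP data digest check: for each received C2HData payload the initiator recomputes CRC32C and compares it with the DDGST carried in the PDU, and on a mismatch it logs the data digest error and completes the READ with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal illustrative sketch of that check in Python (not SPDK's actual code path, which goes through the accelerated crc32 callback named in the log):

def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

def data_digest_ok(pdu_data: bytes, ddgst_from_wire: int) -> bool:
    # Recompute the digest over the PDU data and compare it with the DDGST
    # field from the wire; a mismatch is the "data digest error" reported
    # above, and the command then completes with the transient transport
    # error status (00/22) instead of success.
    return crc32c(pdu_data) == ddgst_from_wire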
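Each of those failed completions is also counted in the bdev's NVMe error statistics, and that counter is what host/digest.sh reads back in the get_transient_errcount / bperf_rpc bdev_get_iostat / jq trace that follows the JSON summary below. A rough Python equivalent of that lookup; the rpc.py path, socket and JSON field names are taken from the trace, and the exact output layout should be treated as an assumption about this build:

import json
import subprocess

RPC_PY = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
BPERF_SOCK = "/var/tmp/bperf.sock"

def get_transient_errcount(bdev: str) -> int:
    # Ask the running bdevperf instance for per-bdev iostat over its RPC
    # socket and pull out the command_transient_transport_error counter,
    # mirroring the jq filter used by digest.sh.
    out = subprocess.check_output(
        [RPC_PY, "-s", BPERF_SOCK, "bdev_get_iostat", "-b", bdev])
    stat = json.loads(out)
    return int(stat["bdevs"][0]["driver_specific"]["nvme_error"]
                   ["status_code"]["command_transient_transport_error"])

# digest.sh then asserts the count is positive, which is the (( 156 > 0 ))
# check visible in the trace below.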
DATA BLOCK TRANSPORT 0x0
00:26:50.695 [2024-11-15 10:45:39.119892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:50.695 [2024-11-15 10:45:39.131126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510)
00:26:50.695 [2024-11-15 10:45:39.131154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:50.695 [2024-11-15 10:45:39.131197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:50.695 [2024-11-15 10:45:39.144831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510)
00:26:50.695 [2024-11-15 10:45:39.144858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:50.695 [2024-11-15 10:45:39.144888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:50.695 [2024-11-15 10:45:39.159332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510)
00:26:50.695 [2024-11-15 10:45:39.159383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:50.695 [2024-11-15 10:45:39.159401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:50.953 [2024-11-15 10:45:39.170233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510)
00:26:50.953 [2024-11-15 10:45:39.170262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:50.953 [2024-11-15 10:45:39.170293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:50.953 19859.50 IOPS, 77.58 MiB/s [2024-11-15T09:45:39.416Z] [2024-11-15 10:45:39.185307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d3510)
00:26:50.953 [2024-11-15 10:45:39.185336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:50.953 [2024-11-15 10:45:39.185378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:50.953 
00:26:50.953 Latency(us)
00:26:50.953 [2024-11-15T09:45:39.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:50.953 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:50.953 nvme0n1 : 2.05 19475.19 76.07 0.00 0.00 6434.28 3349.62 50098.63
00:26:50.953 [2024-11-15T09:45:39.416Z] ===================================================================================================================
00:26:50.953 [2024-11-15T09:45:39.416Z] Total : 19475.19 76.07 0.00 0.00 6434.28 3349.62 50098.63
00:26:50.953 {
00:26:50.953 "results": [
00:26:50.953 {
00:26:50.953 "job": "nvme0n1",
00:26:50.953 "core_mask": "0x2",
00:26:50.953 "workload": "randread",
00:26:50.953 "status": "finished",
00:26:50.953 "queue_depth": 128,
00:26:50.953 "io_size": 4096,
00:26:50.953 "runtime": 2.046039,
00:26:50.953 "iops": 19475.190844358294,
00:26:50.953 "mibps": 76.07496423577459,
00:26:50.953 "io_failed": 0,
00:26:50.953 "io_timeout": 0,
00:26:50.953 "avg_latency_us": 6434.284981833291,
00:26:50.953 "min_latency_us": 3349.617777777778,
00:26:50.953 "max_latency_us": 50098.63111111111
00:26:50.953 }
00:26:50.953 ],
00:26:50.953 "core_count": 1
00:26:50.953 }
00:26:50.953 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:50.953 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:50.953 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:50.953 | .driver_specific
00:26:50.954 | .nvme_error
00:26:50.954 | .status_code
00:26:50.954 | .command_transient_transport_error'
00:26:50.954 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:51.211 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 156 > 0 ))
00:26:51.211 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 483223
00:26:51.211 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 483223 ']'
00:26:51.211 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 483223
00:26:51.211 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:26:51.211 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:51.211 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 483223
00:26:51.211 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:26:51.211 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:26:51.211 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 483223'
00:26:51.211 killing process with pid 483223
00:26:51.211 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 483223
00:26:51.211 Received shutdown signal, test time was about 2.000000 seconds
00:26:51.211 
00:26:51.211 Latency(us)
00:26:51.211 [2024-11-15T09:45:39.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:51.211 [2024-11-15T09:45:39.674Z] ===================================================================================================================
00:26:51.211 [2024-11-15T09:45:39.674Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:51.211 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 483223
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=483646
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 483646 /var/tmp/bperf.sock
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 483646 ']'
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:51.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:26:51.470 10:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:51.470 [2024-11-15 10:45:39.821325] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization...
00:26:51.470 [2024-11-15 10:45:39.821442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483646 ]
00:26:51.470 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:51.470 Zero copy mechanism will not be used.
00:26:51.470 [2024-11-15 10:45:39.892492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:51.727 [2024-11-15 10:45:39.952732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:51.727 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:26:51.727 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:26:51.728 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:51.728 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:51.985 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:51.985 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:51.985 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:51.985 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:51.985 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:51.985 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:52.242 nvme0n1
00:26:52.501 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:52.501 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:52.501 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:52.501 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:52.501 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:52.501 10:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:52.501 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:52.501 Zero copy mechanism will not be used.
00:26:52.501 Running I/O for 2 seconds...
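Restated as a minimal stand-alone sketch (assuming the same bdevperf RPC socket, target address, subsystem NQN, and spdk checkout that appear in the trace above, with SPDK_DIR as a stand-in for that checkout), the traced sequence for this digest-error run and the counter check that follows it looks like:

  #!/usr/bin/env bash
  # Sketch only: replays the commands traced in this log, not the harness itself.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # stand-in for the checkout used above
  SOCK=/var/tmp/bperf.sock
  RPC="$SPDK_DIR/scripts/rpc.py -s $SOCK"

  # Start bdevperf idle (-z: wait for the perform_tests RPC), exactly as traced above.
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
  sleep 1   # simplification; the harness waits for the socket via waitforlisten

  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-controller NVMe error counters
  $RPC accel_error_inject_error -o crc32c -t disable                   # clear any earlier crc32c injection
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # attach with TCP data digest enabled
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32             # re-arm crc32c corruption with the traced arguments
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests   # the timed 2-second run

  # Afterwards the test counts reads that completed with a transient transport error,
  # which is how the injected digest mismatches are expected to surface (156 in the previous run above).
  errcount=$($RPC bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))

Here -z keeps bdevperf idle until the perform_tests RPC arrives, which is what gives the script room to attach the --ddgst controller and arm the crc32c corruption before any I/O is issued.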
00:26:52.501 [2024-11-15 10:45:40.842876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.842948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.842970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.846957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.847000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.847017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.851783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.851811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.851843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.857981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.858016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.858049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.864459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.864488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.864519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.871112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.871139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.871171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.877557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.877585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.877616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.884760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.884789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.884820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.891199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.891228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.891259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.898029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.898058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.898089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.905829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.905859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.905889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.914299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.914329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.914368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.921983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.922011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.922042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.929151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.929179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.929210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.935752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.935780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.935812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.941254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.941283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.941314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.947932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.501 [2024-11-15 10:45:40.947961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.501 [2024-11-15 10:45:40.947992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.501 [2024-11-15 10:45:40.954071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.502 [2024-11-15 10:45:40.954100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.502 [2024-11-15 10:45:40.954130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.502 [2024-11-15 10:45:40.960177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.502 [2024-11-15 10:45:40.960205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.502 [2024-11-15 10:45:40.960236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.502 [2024-11-15 10:45:40.967409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.502 [2024-11-15 10:45:40.967456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.502 [2024-11-15 10:45:40.967474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.760 [2024-11-15 10:45:40.974168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.760 [2024-11-15 10:45:40.974206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.760 [2024-11-15 10:45:40.974239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.760 [2024-11-15 10:45:40.979726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.760 [2024-11-15 10:45:40.979754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.760 [2024-11-15 10:45:40.979786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.760 [2024-11-15 10:45:40.985005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.760 [2024-11-15 10:45:40.985032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.760 [2024-11-15 10:45:40.985063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.760 [2024-11-15 10:45:40.990329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.760 [2024-11-15 10:45:40.990356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.760 [2024-11-15 10:45:40.990396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.760 [2024-11-15 10:45:40.995717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.760 [2024-11-15 10:45:40.995757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.760 [2024-11-15 10:45:40.995788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.760 [2024-11-15 10:45:41.000950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.760 [2024-11-15 10:45:41.000977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.001008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.006241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.006269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.006299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.011447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.011476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 
[2024-11-15 10:45:41.011509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.016604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.016632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.016648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.021883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.021910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.021940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.027113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.027139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.027170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.032175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.032201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.032232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.037300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.037326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.037357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.042401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.042429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.042460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.047684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.047711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.047726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.052856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.052883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.052914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.058185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.058212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.058243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.063702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.063728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.063767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.069289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.069315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.069345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.075416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.075445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.075478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.081132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.081159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.081193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.086793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.086821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.086854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.092307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.092334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.092372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.097877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.097904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.097936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.103780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.103816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.103849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.110040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.110067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.110098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.115788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.115821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.115853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.121766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.121802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.121833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.127699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.127740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.127760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.133557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.133586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.133617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.139785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.139812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.139843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.146001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.146029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.761 [2024-11-15 10:45:41.146060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.761 [2024-11-15 10:45:41.152299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.761 [2024-11-15 10:45:41.152325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.762 [2024-11-15 10:45:41.152356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.762 [2024-11-15 10:45:41.158796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.762 [2024-11-15 10:45:41.158823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.762 [2024-11-15 10:45:41.158854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.762 [2024-11-15 10:45:41.165205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.762 [2024-11-15 10:45:41.165231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.762 [2024-11-15 10:45:41.165261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.762 [2024-11-15 10:45:41.171693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.762 
[2024-11-15 10:45:41.171743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.762 [2024-11-15 10:45:41.171759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.762 [2024-11-15 10:45:41.178247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.762 [2024-11-15 10:45:41.178274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.762 [2024-11-15 10:45:41.178304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.762 [2024-11-15 10:45:41.185015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.762 [2024-11-15 10:45:41.185043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.762 [2024-11-15 10:45:41.185073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.762 [2024-11-15 10:45:41.191728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.762 [2024-11-15 10:45:41.191755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.762 [2024-11-15 10:45:41.191786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.762 [2024-11-15 10:45:41.198555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.762 [2024-11-15 10:45:41.198583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.762 [2024-11-15 10:45:41.198615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.762 [2024-11-15 10:45:41.205634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.762 [2024-11-15 10:45:41.205676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.762 [2024-11-15 10:45:41.205692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.762 [2024-11-15 10:45:41.212499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.762 [2024-11-15 10:45:41.212528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.762 [2024-11-15 10:45:41.212560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.762 [2024-11-15 10:45:41.219405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ebc6e0) 00:26:52.762 [2024-11-15 10:45:41.219434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.762 [2024-11-15 10:45:41.219467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.762 [2024-11-15 10:45:41.226562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:52.762 [2024-11-15 10:45:41.226604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.762 [2024-11-15 10:45:41.226655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.020 [2024-11-15 10:45:41.233759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.020 [2024-11-15 10:45:41.233788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.020 [2024-11-15 10:45:41.233820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.240447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.240477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.240511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.247134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.247161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.247193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.253790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.253817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.253849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.260361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.260394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.260426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.267202] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.267229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.267260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.273802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.273829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.273861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.280510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.280538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.280570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.287215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.287247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.287279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.293914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.293942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.293973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.300493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.300522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.300553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.307293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.307320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.307351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:26:53.021 [2024-11-15 10:45:41.313970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.313997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.314028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.320714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.320756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.320772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.327419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.327447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.327478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.334040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.334066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.334097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.340705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.340731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.340763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.347391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.347419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.347451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.021 [2024-11-15 10:45:41.354064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.021 [2024-11-15 10:45:41.354091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.021 [2024-11-15 10:45:41.354121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:53.021 [2024-11-15 10:45:41.360873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0)
00:26:53.021 [2024-11-15 10:45:41.360902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:53.021 [2024-11-15 10:45:41.360933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:53.021 [2024-11-15 10:45:41.367735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0)
00:26:53.021 [2024-11-15 10:45:41.367762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:53.021 [2024-11-15 10:45:41.367791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair=(0x1ebc6e0), READ sqid:1 with varying cid and lba, len:32, completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for the remaining READ completions between 10:45:41.374 and 10:45:42.164 ...]
00:26:53.543 5038.00 IOPS, 629.75 MiB/s [2024-11-15T09:45:42.006Z]
00:26:53.805 [2024-11-15 10:45:42.169552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0)
00:26:53.805 [2024-11-15 10:45:42.169598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:53.805 [2024-11-15 10:45:42.169620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.174643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.174670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.174702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.179701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.179742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.179758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.184662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.184709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.184724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.189824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.189850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.189881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.194955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.194980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.195010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.200012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.200038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.200069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.205199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.205229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1312 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.205260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.210293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.210319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.210348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.215460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.215487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.215520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.220711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.220737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.220767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.225904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.225930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.225961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.232549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.232593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.232626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.237171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.237198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.237228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.242370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.242411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.242427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.247640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.247681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.247696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.254087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.254115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.254147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.261927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.261954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.261987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.805 [2024-11-15 10:45:42.269040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:53.805 [2024-11-15 10:45:42.269070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.805 [2024-11-15 10:45:42.269110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.275274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.275306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.275341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.281321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.281372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.281401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.286844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.286872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.286903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.292910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.292937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.292969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.299172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.299199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.299230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.305383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.305412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.305451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.311320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.311368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.311393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.316794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.316820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.316852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.322928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.322955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.322986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.328379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 
[2024-11-15 10:45:42.328407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.328439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.333990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.334018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.334050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.340145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.340180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.340212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.346790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.346818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.346849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.352546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.352574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.352606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.358170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.358198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.358228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.364009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.364036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.364066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.370293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.370335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.370355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.376454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.376484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.376515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.383733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.065 [2024-11-15 10:45:42.383761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.065 [2024-11-15 10:45:42.383793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.065 [2024-11-15 10:45:42.391393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.066 [2024-11-15 10:45:42.391423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.066 [2024-11-15 10:45:42.391455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.066 [2024-11-15 10:45:42.398825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.066 [2024-11-15 10:45:42.398863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.066 [2024-11-15 10:45:42.398894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.066 [2024-11-15 10:45:42.406573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.066 [2024-11-15 10:45:42.406602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.066 [2024-11-15 10:45:42.406633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.066 [2024-11-15 10:45:42.413809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.066 [2024-11-15 10:45:42.413837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.066 [2024-11-15 10:45:42.413875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.066 [2024-11-15 10:45:42.419673] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.066 [2024-11-15 10:45:42.419702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.066 [2024-11-15 10:45:42.419718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.066 [2024-11-15 10:45:42.425149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.066 [2024-11-15 10:45:42.425176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.066 [2024-11-15 10:45:42.425207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.066 [2024-11-15 10:45:42.430824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.430851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.430890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.067 [2024-11-15 10:45:42.435990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.436024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.436056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.067 [2024-11-15 10:45:42.441871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.441898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.441928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.067 [2024-11-15 10:45:42.447100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.447126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.447159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.067 [2024-11-15 10:45:42.452203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.452229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.452258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:26:54.067 [2024-11-15 10:45:42.457140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.457166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.457196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.067 [2024-11-15 10:45:42.462191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.462221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.462252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.067 [2024-11-15 10:45:42.467315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.467342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.467380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.067 [2024-11-15 10:45:42.472299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.472325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.472355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.067 [2024-11-15 10:45:42.477380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.477406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.477436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.067 [2024-11-15 10:45:42.482292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.482318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.482356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.067 [2024-11-15 10:45:42.487462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.487497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.487528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.067 [2024-11-15 10:45:42.492506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.492534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.492565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.067 [2024-11-15 10:45:42.498171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.498197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.498228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.067 [2024-11-15 10:45:42.503516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.067 [2024-11-15 10:45:42.503543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.067 [2024-11-15 10:45:42.503574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.068 [2024-11-15 10:45:42.509237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.068 [2024-11-15 10:45:42.509263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.068 [2024-11-15 10:45:42.509294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.068 [2024-11-15 10:45:42.515330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.068 [2024-11-15 10:45:42.515377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.068 [2024-11-15 10:45:42.515394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.068 [2024-11-15 10:45:42.521234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.068 [2024-11-15 10:45:42.521261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.068 [2024-11-15 10:45:42.521292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.068 [2024-11-15 10:45:42.528601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.068 [2024-11-15 10:45:42.528648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.068 [2024-11-15 10:45:42.528665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.326 [2024-11-15 10:45:42.536512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.326 [2024-11-15 10:45:42.536560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.326 [2024-11-15 10:45:42.536579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.326 [2024-11-15 10:45:42.544106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.326 [2024-11-15 10:45:42.544135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.326 [2024-11-15 10:45:42.544167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.551564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.551600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.551632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.559444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.559471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.559501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.566369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.566397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.566436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.574487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.574517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.574549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.581506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.581535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.581567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.588907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.588936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.588967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.596998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.597027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.597058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.606122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.606152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.606184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.613159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.613187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.613219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.621834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.621864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.621896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.629063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.629090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.629122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.636632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.636685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 
[2024-11-15 10:45:42.636701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.642668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.642695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.642725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.648666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.648707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.648722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.655151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.655178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.655210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.661529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.661557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.661588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.667353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.667388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.667420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.674640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.674684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.674701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.681458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.681486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.681517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.688168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.688195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.688228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.694859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.694885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.694916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.701489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.701517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.701548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.708331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.708381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.708398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.715082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.715109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.327 [2024-11-15 10:45:42.715140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.327 [2024-11-15 10:45:42.721786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.327 [2024-11-15 10:45:42.721813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.328 [2024-11-15 10:45:42.721843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.328 [2024-11-15 10:45:42.728582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.328 [2024-11-15 10:45:42.728609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.328 [2024-11-15 10:45:42.728641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.328 [2024-11-15 10:45:42.734950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.328 [2024-11-15 10:45:42.734977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.328 [2024-11-15 10:45:42.735008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.328 [2024-11-15 10:45:42.741629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.328 [2024-11-15 10:45:42.741670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.328 [2024-11-15 10:45:42.741686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.328 [2024-11-15 10:45:42.748281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.328 [2024-11-15 10:45:42.748307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.328 [2024-11-15 10:45:42.748346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.328 [2024-11-15 10:45:42.754974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.328 [2024-11-15 10:45:42.755002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.328 [2024-11-15 10:45:42.755033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.328 [2024-11-15 10:45:42.761631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.328 [2024-11-15 10:45:42.761673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.328 [2024-11-15 10:45:42.761690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.328 [2024-11-15 10:45:42.768256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.328 [2024-11-15 10:45:42.768282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.328 [2024-11-15 10:45:42.768314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.328 [2024-11-15 10:45:42.774898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.328 [2024-11-15 10:45:42.774927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.328 [2024-11-15 10:45:42.774958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.328 [2024-11-15 10:45:42.781559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.328 [2024-11-15 10:45:42.781588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.328 [2024-11-15 10:45:42.781620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.328 [2024-11-15 10:45:42.788301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.328 [2024-11-15 10:45:42.788328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.328 [2024-11-15 10:45:42.788358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.586 [2024-11-15 10:45:42.795414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.586 [2024-11-15 10:45:42.795445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.586 [2024-11-15 10:45:42.795477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.586 [2024-11-15 10:45:42.802625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.586 [2024-11-15 10:45:42.802669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.586 [2024-11-15 10:45:42.802687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.586 [2024-11-15 10:45:42.809779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.586 [2024-11-15 10:45:42.809807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.586 [2024-11-15 10:45:42.809838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.586 [2024-11-15 10:45:42.818649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.586 [2024-11-15 10:45:42.818694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.586 [2024-11-15 10:45:42.818710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.586 [2024-11-15 10:45:42.826786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 
00:26:54.586 [2024-11-15 10:45:42.826815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.586 [2024-11-15 10:45:42.826845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.586 [2024-11-15 10:45:42.833691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.586 [2024-11-15 10:45:42.833719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.586 [2024-11-15 10:45:42.833751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.586 5052.50 IOPS, 631.56 MiB/s [2024-11-15T09:45:43.049Z] [2024-11-15 10:45:42.841784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc6e0) 00:26:54.586 [2024-11-15 10:45:42.841811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.586 [2024-11-15 10:45:42.841842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.586 00:26:54.586 Latency(us) 00:26:54.586 [2024-11-15T09:45:43.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.586 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:54.587 nvme0n1 : 2.00 5052.99 631.62 0.00 0.00 3161.95 801.00 11650.84 00:26:54.587 [2024-11-15T09:45:43.050Z] =================================================================================================================== 00:26:54.587 [2024-11-15T09:45:43.050Z] Total : 5052.99 631.62 0.00 0.00 3161.95 801.00 11650.84 00:26:54.587 { 00:26:54.587 "results": [ 00:26:54.587 { 00:26:54.587 "job": "nvme0n1", 00:26:54.587 "core_mask": "0x2", 00:26:54.587 "workload": "randread", 00:26:54.587 "status": "finished", 00:26:54.587 "queue_depth": 16, 00:26:54.587 "io_size": 131072, 00:26:54.587 "runtime": 2.002972, 00:26:54.587 "iops": 5052.991254995078, 00:26:54.587 "mibps": 631.6239068743847, 00:26:54.587 "io_failed": 0, 00:26:54.587 "io_timeout": 0, 00:26:54.587 "avg_latency_us": 3161.9494592468172, 00:26:54.587 "min_latency_us": 800.9955555555556, 00:26:54.587 "max_latency_us": 11650.844444444445 00:26:54.587 } 00:26:54.587 ], 00:26:54.587 "core_count": 1 00:26:54.587 } 00:26:54.587 10:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:54.587 10:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:54.587 10:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:54.587 | .driver_specific 00:26:54.587 | .nvme_error 00:26:54.587 | .status_code 00:26:54.587 | .command_transient_transport_error' 00:26:54.587 10:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:54.845 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 327 > 0 )) 00:26:54.845 10:45:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 483646 00:26:54.845 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 483646 ']' 00:26:54.845 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 483646 00:26:54.845 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:54.845 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:54.845 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 483646 00:26:54.845 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:54.845 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:54.845 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 483646' 00:26:54.845 killing process with pid 483646 00:26:54.845 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 483646 00:26:54.845 Received shutdown signal, test time was about 2.000000 seconds 00:26:54.845 00:26:54.845 Latency(us) 00:26:54.845 [2024-11-15T09:45:43.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.845 [2024-11-15T09:45:43.308Z] =================================================================================================================== 00:26:54.845 [2024-11-15T09:45:43.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.845 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 483646 00:26:55.103 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:55.103 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:55.103 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:55.103 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:55.103 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:55.104 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=484153 00:26:55.104 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:55.104 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 484153 /var/tmp/bperf.sock 00:26:55.104 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 484153 ']' 00:26:55.104 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:55.104 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:55.104 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
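Condensed sketch of the digest-error flow traced above, reconstructed from the xtrace output rather than from host/digest.sh itself: relaunch bdevperf against /var/tmp/bperf.sock, attach the controller with data digest enabled, arm CRC32C corruption in the accel layer, drive I/O for two seconds, then check that the per-bdev transient transport error counter is non-zero. Paths, the bperf socket, and the nvme0n1 bdev name are copied from this run; which RPC socket the rpc_cmd helper uses for the accel calls is not visible in the trace, so the default socket shown below is an assumption.

# Sketch only -- commands mirrored from the xtrace above; adjust paths/sockets for other setups.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf in wait mode (-z) for the randwrite 4096 / qd=128 pass; the
# script then waits for the RPC socket to come up (waitforlisten).
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

# Track NVMe error counters and set retry count to -1 so digest errors surface
# as transient transport errors instead of failing the job.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Injection is disabled while attaching with data digest enabled (--ddgst), then
# armed for crc32c corruption (-t corrupt -i 256) before the workload starts.
# (In the trace these go through rpc_cmd; using the default RPC socket here is an assumption.)
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the 2-second workload, then count transient transport errors on nvme0n1
# (the preceding randread pass counted 327 before its bperf was killed).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
errs=$("$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
       | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))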
00:26:55.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:55.104 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:55.104 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:55.104 [2024-11-15 10:45:43.470399] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:26:55.104 [2024-11-15 10:45:43.470478] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484153 ] 00:26:55.104 [2024-11-15 10:45:43.534826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.362 [2024-11-15 10:45:43.590252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.362 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:55.362 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:55.362 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:55.362 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:55.619 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:55.619 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.619 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:55.620 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.620 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.620 10:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:56.185 nvme0n1 00:26:56.185 10:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:56.185 10:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.185 10:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:56.185 10:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.185 10:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:56.185 10:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:56.185 Running I/O for 2 
seconds... 00:26:56.185 [2024-11-15 10:45:44.630484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e7818 00:26:56.185 [2024-11-15 10:45:44.631377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.185 [2024-11-15 10:45:44.631432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:56.185 [2024-11-15 10:45:44.642454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f96f8 00:26:56.185 [2024-11-15 10:45:44.643273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.185 [2024-11-15 10:45:44.643300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.654570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e49b0 00:26:56.442 [2024-11-15 10:45:44.655817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.655845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.666181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f0788 00:26:56.442 [2024-11-15 10:45:44.667403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.667431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.677210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f1868 00:26:56.442 [2024-11-15 10:45:44.678418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.678444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.688472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e4140 00:26:56.442 [2024-11-15 10:45:44.689853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.689879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.699181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fb8b8 00:26:56.442 [2024-11-15 10:45:44.700606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.700632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 
cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.709381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e8088 00:26:56.442 [2024-11-15 10:45:44.710316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.710341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.720158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e9168 00:26:56.442 [2024-11-15 10:45:44.721145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.721171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.731046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e5a90 00:26:56.442 [2024-11-15 10:45:44.732037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.732062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.743400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e49b0 00:26:56.442 [2024-11-15 10:45:44.744913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.744939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.753617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f3e60 00:26:56.442 [2024-11-15 10:45:44.754689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.754715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.765904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f4f40 00:26:56.442 [2024-11-15 10:45:44.767564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.767591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.776150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f2510 00:26:56.442 [2024-11-15 10:45:44.777402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.777429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.786135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e84c0 00:26:56.442 [2024-11-15 10:45:44.787704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.787730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:56.442 [2024-11-15 10:45:44.796278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f7970 00:26:56.442 [2024-11-15 10:45:44.797136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.442 [2024-11-15 10:45:44.797162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:56.443 [2024-11-15 10:45:44.807800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e0ea0 00:26:56.443 [2024-11-15 10:45:44.808742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.443 [2024-11-15 10:45:44.808767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:56.443 [2024-11-15 10:45:44.818778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e1f80 00:26:56.443 [2024-11-15 10:45:44.819760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.443 [2024-11-15 10:45:44.819786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:56.443 [2024-11-15 10:45:44.829051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e2c28 00:26:56.443 [2024-11-15 10:45:44.830005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.443 [2024-11-15 10:45:44.830030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:56.443 [2024-11-15 10:45:44.840158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166de038 00:26:56.443 [2024-11-15 10:45:44.841114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.443 [2024-11-15 10:45:44.841139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:56.443 [2024-11-15 10:45:44.851217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e6b70 00:26:56.443 [2024-11-15 10:45:44.852199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.443 [2024-11-15 10:45:44.852229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:56.443 [2024-11-15 10:45:44.861587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166df550 00:26:56.443 [2024-11-15 10:45:44.862532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.443 [2024-11-15 10:45:44.862558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:56.443 [2024-11-15 10:45:44.873726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ee5c8 00:26:56.443 [2024-11-15 10:45:44.874815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.443 [2024-11-15 10:45:44.874840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:56.443 [2024-11-15 10:45:44.883949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f96f8 00:26:56.443 [2024-11-15 10:45:44.885040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.443 [2024-11-15 10:45:44.885067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:56.443 [2024-11-15 10:45:44.896427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e73e0 00:26:56.443 [2024-11-15 10:45:44.897645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.443 [2024-11-15 10:45:44.897672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:56.443 [2024-11-15 10:45:44.906862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166df118 00:26:56.443 [2024-11-15 10:45:44.908267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.443 [2024-11-15 10:45:44.908296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:44.918956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e84c0 00:26:56.701 [2024-11-15 10:45:44.919748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:44.919790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:44.930580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fa3a0 00:26:56.701 [2024-11-15 10:45:44.931606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:44.931634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:44.941035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f96f8 00:26:56.701 [2024-11-15 10:45:44.942609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:44.942635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:44.951213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f6cc8 00:26:56.701 [2024-11-15 10:45:44.952084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:44.952110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:44.963598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e3498 00:26:56.701 [2024-11-15 10:45:44.964968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:44.964993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:44.973739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f2d80 00:26:56.701 [2024-11-15 10:45:44.974659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:44.974685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:44.987209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ebb98 00:26:56.701 [2024-11-15 10:45:44.989012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:44.989038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:44.995000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f0350 00:26:56.701 [2024-11-15 10:45:44.995828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:44.995853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:45.005271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166eb760 00:26:56.701 [2024-11-15 10:45:45.006096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 
10:45:45.006120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:45.017448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e8d30 00:26:56.701 [2024-11-15 10:45:45.018407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:45.018434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:45.028590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e23b8 00:26:56.701 [2024-11-15 10:45:45.029653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:45.029678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:45.038591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f1ca0 00:26:56.701 [2024-11-15 10:45:45.039526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:45.039552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:45.049864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e1f80 00:26:56.701 [2024-11-15 10:45:45.050829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:45.050854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:45.061117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e0a68 00:26:56.701 [2024-11-15 10:45:45.062202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:45.062227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:45.071496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ed4e8 00:26:56.701 [2024-11-15 10:45:45.072578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:45.072604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:45.083721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e3498 00:26:56.701 [2024-11-15 10:45:45.084962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:56.701 [2024-11-15 10:45:45.084988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:56.701 [2024-11-15 10:45:45.094825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166eb760 00:26:56.701 [2024-11-15 10:45:45.096182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.701 [2024-11-15 10:45:45.096207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:56.702 [2024-11-15 10:45:45.106257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fc998 00:26:56.702 [2024-11-15 10:45:45.107946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.702 [2024-11-15 10:45:45.107972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:56.702 [2024-11-15 10:45:45.114246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e99d8 00:26:56.702 [2024-11-15 10:45:45.115090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.702 [2024-11-15 10:45:45.115116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:56.702 [2024-11-15 10:45:45.126354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fcdd0 00:26:56.702 [2024-11-15 10:45:45.127341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.702 [2024-11-15 10:45:45.127389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:56.702 [2024-11-15 10:45:45.137659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e38d0 00:26:56.702 [2024-11-15 10:45:45.138835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.702 [2024-11-15 10:45:45.138871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:56.702 [2024-11-15 10:45:45.148234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e0ea0 00:26:56.702 [2024-11-15 10:45:45.149320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.702 [2024-11-15 10:45:45.149346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:56.702 [2024-11-15 10:45:45.160465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f4f40 00:26:56.702 [2024-11-15 10:45:45.161713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18543 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:56.702 [2024-11-15 10:45:45.161739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:56.959 [2024-11-15 10:45:45.172750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f31b8 00:26:56.960 [2024-11-15 10:45:45.174183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.174210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.182830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166eff18 00:26:56.960 [2024-11-15 10:45:45.184061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.184087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.193436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fdeb0 00:26:56.960 [2024-11-15 10:45:45.194655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.194681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.204694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e3d08 00:26:56.960 [2024-11-15 10:45:45.206009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.206034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.216183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fc128 00:26:56.960 [2024-11-15 10:45:45.217717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.217743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.226485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fd640 00:26:56.960 [2024-11-15 10:45:45.227605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.227631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.237595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fb048 00:26:56.960 [2024-11-15 10:45:45.238583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:12743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.238609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.248778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e99d8 00:26:56.960 [2024-11-15 10:45:45.250027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.250052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.259846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ef6a8 00:26:56.960 [2024-11-15 10:45:45.261204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.261230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.268874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f2d80 00:26:56.960 [2024-11-15 10:45:45.269700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.269726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.280163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f3e60 00:26:56.960 [2024-11-15 10:45:45.281104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.281130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.291277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ea248 00:26:56.960 [2024-11-15 10:45:45.292205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.292231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.302545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f9f68 00:26:56.960 [2024-11-15 10:45:45.303760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.303786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.315863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f2948 00:26:56.960 [2024-11-15 10:45:45.317653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:28 nsid:1 lba:20161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.317679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.323567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f4298 00:26:56.960 [2024-11-15 10:45:45.324381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.324408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.333965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166dece0 00:26:56.960 [2024-11-15 10:45:45.334766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.334792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.347311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f96f8 00:26:56.960 [2024-11-15 10:45:45.348571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.348598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.358762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f6890 00:26:56.960 [2024-11-15 10:45:45.360121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.360146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.369985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ea680 00:26:56.960 [2024-11-15 10:45:45.371642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.371684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.377840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f8a50 00:26:56.960 [2024-11-15 10:45:45.378633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.378658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.388804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f5be8 00:26:56.960 [2024-11-15 10:45:45.389615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.389642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.402190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e7818 00:26:56.960 [2024-11-15 10:45:45.403441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.403467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.412684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166eee38 00:26:56.960 [2024-11-15 10:45:45.413964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.413989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:56.960 [2024-11-15 10:45:45.424118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166edd58 00:26:56.960 [2024-11-15 10:45:45.425058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.960 [2024-11-15 10:45:45.425092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.437882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e0a68 00:26:57.219 [2024-11-15 10:45:45.439704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.439732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.445627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ef270 00:26:57.219 [2024-11-15 10:45:45.446449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.446474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.455996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f4298 00:26:57.219 [2024-11-15 10:45:45.456842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.456869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.469563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fb480 00:26:57.219 [2024-11-15 
10:45:45.471060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.471086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.481170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166dece0 00:26:57.219 [2024-11-15 10:45:45.482130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.482157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.491094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fac10 00:26:57.219 [2024-11-15 10:45:45.492134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.492160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.501995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fb048 00:26:57.219 [2024-11-15 10:45:45.502980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.503014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.513445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e3d08 00:26:57.219 [2024-11-15 10:45:45.514575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.514601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.524895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166eb328 00:26:57.219 [2024-11-15 10:45:45.526330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.526380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.535112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e12d8 00:26:57.219 [2024-11-15 10:45:45.536284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.536310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.545967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e0a68 
00:26:57.219 [2024-11-15 10:45:45.546962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.546987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.557407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fb048 00:26:57.219 [2024-11-15 10:45:45.558563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.558589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.568490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fb480 00:26:57.219 [2024-11-15 10:45:45.569912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.569944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.579791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f0350 00:26:57.219 [2024-11-15 10:45:45.580863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.580890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.592595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f0350 00:26:57.219 [2024-11-15 10:45:45.594214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.594239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.604526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f2d80 00:26:57.219 [2024-11-15 10:45:45.606257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.606282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.612469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166df550 00:26:57.219 [2024-11-15 10:45:45.613356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.613388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:57.219 22890.00 IOPS, 89.41 MiB/s [2024-11-15T09:45:45.682Z] [2024-11-15 10:45:45.625221] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fb8b8 00:26:57.219 [2024-11-15 10:45:45.626281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.626307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.636380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166dece0 00:26:57.219 [2024-11-15 10:45:45.637396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.637425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.647039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fc998 00:26:57.219 [2024-11-15 10:45:45.647986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.648015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.660339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166eff18 00:26:57.219 [2024-11-15 10:45:45.661858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.219 [2024-11-15 10:45:45.661883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:57.219 [2024-11-15 10:45:45.671876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f6020 00:26:57.219 [2024-11-15 10:45:45.673496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.220 [2024-11-15 10:45:45.673522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:57.220 [2024-11-15 10:45:45.683098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166eb760 00:26:57.220 [2024-11-15 10:45:45.684890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.220 [2024-11-15 10:45:45.684917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.691196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e38d0 00:26:57.478 [2024-11-15 10:45:45.692051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.692078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.702520] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166dece0 00:26:57.478 [2024-11-15 10:45:45.703256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.703282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.713862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f0788 00:26:57.478 [2024-11-15 10:45:45.714598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.714630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.725388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ff3c8 00:26:57.478 [2024-11-15 10:45:45.726412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.726439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.736884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f57b0 00:26:57.478 [2024-11-15 10:45:45.738038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.738064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.748028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fa3a0 00:26:57.478 [2024-11-15 10:45:45.749239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.749264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.758735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166eea00 00:26:57.478 [2024-11-15 10:45:45.759882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.759908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.770171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ef270 00:26:57.478 [2024-11-15 10:45:45.771480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.771507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:57.478 
[2024-11-15 10:45:45.781582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e0630 00:26:57.478 [2024-11-15 10:45:45.783022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.783048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.791486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e5ec8 00:26:57.478 [2024-11-15 10:45:45.793034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.793059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.802590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f6020 00:26:57.478 [2024-11-15 10:45:45.803750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.803776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.813542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f8e88 00:26:57.478 [2024-11-15 10:45:45.814590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.814616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.823863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f57b0 00:26:57.478 [2024-11-15 10:45:45.824753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.824778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.833993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e23b8 00:26:57.478 [2024-11-15 10:45:45.834717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.834742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.846782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e99d8 00:26:57.478 [2024-11-15 10:45:45.847975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.848001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 
m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.858219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ec408 00:26:57.478 [2024-11-15 10:45:45.859547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.859573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.868318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f8a50 00:26:57.478 [2024-11-15 10:45:45.869497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.869522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.878798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e2c28 00:26:57.478 [2024-11-15 10:45:45.879982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.880007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.892098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f8618 00:26:57.478 [2024-11-15 10:45:45.893861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.893887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.899772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166eb760 00:26:57.478 [2024-11-15 10:45:45.900536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.900562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.911312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ef6a8 00:26:57.478 [2024-11-15 10:45:45.912255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.912283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.923331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e5220 00:26:57.478 [2024-11-15 10:45:45.924415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.924444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:57.478 [2024-11-15 10:45:45.934893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fdeb0 00:26:57.478 [2024-11-15 10:45:45.935515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.478 [2024-11-15 10:45:45.935543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:57.736 [2024-11-15 10:45:45.947111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f35f0 00:26:57.736 [2024-11-15 10:45:45.947986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.736 [2024-11-15 10:45:45.948025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:57.736 [2024-11-15 10:45:45.958939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e3d08 00:26:57.736 [2024-11-15 10:45:45.959838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.736 [2024-11-15 10:45:45.959865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:57.736 [2024-11-15 10:45:45.969202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fbcf0 00:26:57.736 [2024-11-15 10:45:45.971024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.736 [2024-11-15 10:45:45.971051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:57.736 [2024-11-15 10:45:45.980386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166df550 00:26:57.737 [2024-11-15 10:45:45.981558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:45.981585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:45.993307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f31b8 00:26:57.737 [2024-11-15 10:45:45.995094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:45.995119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.001256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166eaab8 00:26:57.737 [2024-11-15 10:45:46.002123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.002153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.014530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e3d08 00:26:57.737 [2024-11-15 10:45:46.015883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.015909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.026108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f6cc8 00:26:57.737 [2024-11-15 10:45:46.027782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.027807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.033900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e6738 00:26:57.737 [2024-11-15 10:45:46.034711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.034750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.047444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fa3a0 00:26:57.737 [2024-11-15 10:45:46.048804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.048830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.058457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f0bc0 00:26:57.737 [2024-11-15 10:45:46.059847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.059873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.068886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fc128 00:26:57.737 [2024-11-15 10:45:46.070080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.070105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.081847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f3e60 00:26:57.737 [2024-11-15 10:45:46.083635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.083661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.089740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ebfd0 00:26:57.737 [2024-11-15 10:45:46.090686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.090725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.101216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f9f68 00:26:57.737 [2024-11-15 10:45:46.102297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.102322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.112256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166eff18 00:26:57.737 [2024-11-15 10:45:46.112931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.112957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.123771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e0a68 00:26:57.737 [2024-11-15 10:45:46.124589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.124616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.135162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f0bc0 00:26:57.737 [2024-11-15 10:45:46.136143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.136168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.145514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e9168 00:26:57.737 [2024-11-15 10:45:46.147108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.147134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.154907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fd640 00:26:57.737 [2024-11-15 10:45:46.155739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.155778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.166622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ef270 00:26:57.737 [2024-11-15 10:45:46.167575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.167601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.177735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e0a68 00:26:57.737 [2024-11-15 10:45:46.178690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.178717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.190652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fa3a0 00:26:57.737 [2024-11-15 10:45:46.192178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.192203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:57.737 [2024-11-15 10:45:46.200761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f2510 00:26:57.737 [2024-11-15 10:45:46.202738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.737 [2024-11-15 10:45:46.202766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.211598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f4298 00:26:57.996 [2024-11-15 10:45:46.212429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.212458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.222868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f0bc0 00:26:57.996 [2024-11-15 10:45:46.223850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.223876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.234250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f2948 00:26:57.996 [2024-11-15 10:45:46.235371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 
10:45:46.235398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.245739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166de038 00:26:57.996 [2024-11-15 10:45:46.246997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.247022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.256083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e38d0 00:26:57.996 [2024-11-15 10:45:46.257194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.257220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.266898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ed920 00:26:57.996 [2024-11-15 10:45:46.268130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.268156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.278293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ef6a8 00:26:57.996 [2024-11-15 10:45:46.279706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.279732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.289332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fc998 00:26:57.996 [2024-11-15 10:45:46.290767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.290797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.300065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e49b0 00:26:57.996 [2024-11-15 10:45:46.301020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.301046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.310356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166dece0 00:26:57.996 [2024-11-15 10:45:46.311940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:57.996 [2024-11-15 10:45:46.311964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.321465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f81e0 00:26:57.996 [2024-11-15 10:45:46.322729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.322754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.332392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f4f40 00:26:57.996 [2024-11-15 10:45:46.333670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.333696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.343789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e23b8 00:26:57.996 [2024-11-15 10:45:46.345176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.345202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.354840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e5a90 00:26:57.996 [2024-11-15 10:45:46.355820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.355846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.365386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e4140 00:26:57.996 [2024-11-15 10:45:46.366700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.366726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.376301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166efae0 00:26:57.996 [2024-11-15 10:45:46.377453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.377481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.387778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f2948 00:26:57.996 [2024-11-15 10:45:46.389048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6621 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.389073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.399167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fc128 00:26:57.996 [2024-11-15 10:45:46.400594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.400620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.408316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e3d08 00:26:57.996 [2024-11-15 10:45:46.409071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.409113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.419048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e27f0 00:26:57.996 [2024-11-15 10:45:46.419624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.419650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.431879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fac10 00:26:57.996 [2024-11-15 10:45:46.433236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.433262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.441092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f9b30 00:26:57.996 [2024-11-15 10:45:46.441954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.441980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:57.996 [2024-11-15 10:45:46.452090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166e23b8 00:26:57.996 [2024-11-15 10:45:46.452917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.996 [2024-11-15 10:45:46.452942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.254 [2024-11-15 10:45:46.466659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f9b30 00:26:58.254 [2024-11-15 10:45:46.468435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16237 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.254 [2024-11-15 10:45:46.468463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:58.254 [2024-11-15 10:45:46.474427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166eff18 00:26:58.254 [2024-11-15 10:45:46.475147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.254 [2024-11-15 10:45:46.475174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.254 [2024-11-15 10:45:46.487032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f8a50 00:26:58.254 [2024-11-15 10:45:46.488518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.254 [2024-11-15 10:45:46.488545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:58.254 [2024-11-15 10:45:46.498002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fe2e8 00:26:58.254 [2024-11-15 10:45:46.499161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.254 [2024-11-15 10:45:46.499186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:58.254 [2024-11-15 10:45:46.508471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f1430 00:26:58.254 [2024-11-15 10:45:46.509619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.254 [2024-11-15 10:45:46.509650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:58.254 [2024-11-15 10:45:46.519504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166edd58 00:26:58.254 [2024-11-15 10:45:46.520380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.254 [2024-11-15 10:45:46.520407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:58.254 [2024-11-15 10:45:46.529618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f6cc8 00:26:58.254 [2024-11-15 10:45:46.530492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.254 [2024-11-15 10:45:46.530530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:58.254 [2024-11-15 10:45:46.543029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166eea00 00:26:58.254 [2024-11-15 10:45:46.544354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:90 nsid:1 lba:16220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.254 [2024-11-15 10:45:46.544401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:58.254 [2024-11-15 10:45:46.552546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166ef6a8 00:26:58.254 [2024-11-15 10:45:46.553142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.254 [2024-11-15 10:45:46.553168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:58.254 [2024-11-15 10:45:46.564282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fc560 00:26:58.254 [2024-11-15 10:45:46.565317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.254 [2024-11-15 10:45:46.565342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:58.254 [2024-11-15 10:45:46.574891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166fc128 00:26:58.254 [2024-11-15 10:45:46.575797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.254 [2024-11-15 10:45:46.575829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:58.254 [2024-11-15 10:45:46.587776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166feb58 00:26:58.254 [2024-11-15 10:45:46.589391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-11-15 10:45:46.589417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:58.255 [2024-11-15 10:45:46.595841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f6890 00:26:58.255 [2024-11-15 10:45:46.596570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-11-15 10:45:46.596603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:58.255 [2024-11-15 10:45:46.608118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f5be8 00:26:58.255 [2024-11-15 10:45:46.609028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-11-15 10:45:46.609069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:58.255 [2024-11-15 10:45:46.619121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80210) with pdu=0x2000166f6890 00:26:58.255 23032.00 IOPS, 89.97 MiB/s [2024-11-15T09:45:46.718Z] [2024-11-15 
10:45:46.620024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:58.255 [2024-11-15 10:45:46.620049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:58.255
00:26:58.255 Latency(us)
00:26:58.255 [2024-11-15T09:45:46.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:58.255 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:58.255 nvme0n1 : 2.01 23027.36 89.95 0.00 0.00 5550.59 2682.12 15728.64
00:26:58.255 [2024-11-15T09:45:46.718Z] ===================================================================================================================
00:26:58.255 [2024-11-15T09:45:46.718Z] Total : 23027.36 89.95 0.00 0.00 5550.59 2682.12 15728.64
00:26:58.255 {
00:26:58.255 "results": [
00:26:58.255 {
00:26:58.255 "job": "nvme0n1",
00:26:58.255 "core_mask": "0x2",
00:26:58.255 "workload": "randwrite",
00:26:58.255 "status": "finished",
00:26:58.255 "queue_depth": 128,
00:26:58.255 "io_size": 4096,
00:26:58.255 "runtime": 2.005962,
00:26:58.255 "iops": 23027.355453393433,
00:26:58.255 "mibps": 89.9506072398181,
00:26:58.255 "io_failed": 0,
00:26:58.255 "io_timeout": 0,
00:26:58.255 "avg_latency_us": 5550.591809516479,
00:26:58.255 "min_latency_us": 2682.1214814814816,
00:26:58.255 "max_latency_us": 15728.64
00:26:58.255 }
00:26:58.255 ],
00:26:58.255 "core_count": 1
00:26:58.255 }
00:26:58.255 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:58.255 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:58.255 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:58.255 | .driver_specific
00:26:58.255 | .nvme_error
00:26:58.255 | .status_code
00:26:58.255 | .command_transient_transport_error'
00:26:58.255 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:58.512 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 181 > 0 ))
00:26:58.512 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 484153
00:26:58.512 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 484153 ']'
00:26:58.512 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 484153
00:26:58.512 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:26:58.512 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:58.512 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 484153
00:26:58.512 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:26:58.512 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:26:58.512 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid
484153'
killing process with pid 484153
10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 484153
Received shutdown signal, test time was about 2.000000 seconds
00:26:58.512
00:26:58.512 Latency(us)
00:26:58.512 [2024-11-15T09:45:46.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:58.512 [2024-11-15T09:45:46.975Z] ===================================================================================================================
00:26:58.512 [2024-11-15T09:45:46.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:58.512 10:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 484153
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=484563
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 484563 /var/tmp/bperf.sock
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 484563 ']'
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:26:58.770 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:58.770 [2024-11-15 10:45:47.224799] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization...
00:26:58.770 [2024-11-15 10:45:47.224878] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484563 ]
00:26:58.770 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:58.770 Zero copy mechanism will not be used.
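The trace above launches a second bdevperf instance for the 131072-byte, queue-depth-16 error run and then blocks until its RPC socket at /var/tmp/bperf.sock is ready. A minimal sketch of that launch-and-wait step is shown below; the flag values are taken from the trace, while the polling loop is only an illustrative stand-in for the waitforlisten() helper in autotest_common.sh, which this log does not reproduce.

# Minimal sketch of the launch-and-wait step traced above (not the real
# waitforlisten() helper): start bdevperf against the bperf RPC socket and
# poll until the socket answers. Flag values come straight from the trace.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# -m 2: core mask, -r: RPC socket, -w/-o/-q: workload, IO size, queue depth,
# -t 2: 2-second run, -z: stay idle until perform_tests arrives over RPC.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Illustrative stand-in for waitforlisten(): rpc_get_methods succeeds once the
# application is listening on the UNIX domain socket.
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
done

The -z flag keeps bdevperf idle until perform_tests is issued over the same socket, which is why the workload only starts after the RPC setup traced next.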
00:26:59.028 [2024-11-15 10:45:47.289411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:59.028 [2024-11-15 10:45:47.342877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:59.028 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:26:59.028 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:26:59.028 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:59.028 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:59.286 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:59.287 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:59.287 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:59.287 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:59.287 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:59.287 10:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:59.851 nvme0n1
00:26:59.851 10:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:59.851 10:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:59.851 10:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:59.851 10:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:59.851 10:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:59.851 10:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:00.109 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:00.109 Zero copy mechanism will not be used.
00:27:00.109 Running I/O for 2 seconds...
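The RPC calls traced above are the heart of the digest-error scenario: keep per-command NVMe error statistics, attach the controller with TCP data digest enabled (--ddgst), arm CRC32C error injection, and only then start the queued bdevperf job. Condensed into one place, and using only the commands visible in this log, the flow looks roughly like the sketch below; digest.sh's clean-up and error handling are omitted.

# Condensed sketch of the digest-error flow traced in this log; it reuses only
# the RPCs shown above and leaves out digest.sh's clean-up and error handling.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Track per-command NVMe error codes and retry failed I/O at the bdev layer
# without limit, so digest errors are retried and counted instead of failing I/O.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Disable CRC32C error injection while attaching, bring up the controller with
# TCP data digest enabled (--ddgst), then switch injection to corrupt mode with
# the same -t corrupt -i 32 arguments the trace shows, so subsequent writes hit
# data digest errors.
$RPC accel_error_inject_error -o crc32c -t disable
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the queued bdevperf job, then read back how many commands completed
# with a transient transport error (same jq filter as get_transient_errcount).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
$RPC bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The final jq expression is the same filter get_transient_errcount applied earlier in the trace; a non-zero count confirms that the injected digest corruption surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions while io_failed stays 0 in the results above.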
00:27:00.109 [2024-11-15 10:45:48.328395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.328500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.328541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.334834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.334952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.334980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.340768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.340893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.340920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.347417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.347669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.347709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.353425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.353583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.353610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.359200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.359299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.359327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.364989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.365115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.365142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.371265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.371416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.371445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.378409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.378589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.378616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.384929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.385074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.385101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.391444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.391567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.391594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.398024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.398219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.398254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.404768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.404877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.404904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.411245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.411406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.411434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.417806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.417912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.109 [2024-11-15 10:45:48.417943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.109 [2024-11-15 10:45:48.424460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.109 [2024-11-15 10:45:48.424685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.424712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.431539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.431742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.431768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.438091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.438209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.438235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.444592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.444732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.444757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.451399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.451624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.451652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.457710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.457935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.457967] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.464410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.464520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.464547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.471094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.471225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.471252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.477813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.478026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.478052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.484506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.484720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.484746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.491510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.491720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.491761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.497858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.497969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.497995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.504555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.504702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.504728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.511340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.511573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.511600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.518456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.518708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.518734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.524959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.525094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.525120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.531481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.531715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.531756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.538096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.538223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.538248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.544344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.544497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.544524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.550739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.550870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 
10:45:48.550895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.557179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.557322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.557369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.563988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.564127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.564154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.110 [2024-11-15 10:45:48.570822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.110 [2024-11-15 10:45:48.570993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.110 [2024-11-15 10:45:48.571020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.577293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.577430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.577459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.583528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.583677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.583703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.589748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.589884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.589911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.595898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.595990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:00.370 [2024-11-15 10:45:48.596015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.601874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.601985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.602012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.607811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.607898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.607923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.613621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.613780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.613806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.620096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.620310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.620336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.626849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.626976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.627007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.633148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.633283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.633309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.638913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.639013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.639038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.644845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.645003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.645030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.650802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.650894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.650919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.656634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.656785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.656809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.662646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.662769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.662796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.668885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.669044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.669070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.675194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.675360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.675408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.681613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.681766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.681791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.688035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.688167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.688193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.694317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.694481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.694513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.700639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.700783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.700809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.707003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.707157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.707184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.370 [2024-11-15 10:45:48.713276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.370 [2024-11-15 10:45:48.713444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.370 [2024-11-15 10:45:48.713473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.719749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.719870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.719896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.727534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.727619] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.727644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.734525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.734618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.734644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.740437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.740567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.740594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.746595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.746696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.746723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.752580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.752672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.752712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.759035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.759167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.759193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.765869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.766085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.766112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.773335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.773513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.773543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.781252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.781387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.781426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.788542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.788686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.788713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.796296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.796447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.796484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.803248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.803371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.803399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.809961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.810091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.810117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.816673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.816779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.816806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.824340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 
10:45:48.824464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.824492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.371 [2024-11-15 10:45:48.831591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.371 [2024-11-15 10:45:48.831720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.371 [2024-11-15 10:45:48.831749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.630 [2024-11-15 10:45:48.839723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.630 [2024-11-15 10:45:48.839849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.630 [2024-11-15 10:45:48.839884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.630 [2024-11-15 10:45:48.846183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.630 [2024-11-15 10:45:48.846270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.630 [2024-11-15 10:45:48.846295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.630 [2024-11-15 10:45:48.852231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.630 [2024-11-15 10:45:48.852326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.630 [2024-11-15 10:45:48.852375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.630 [2024-11-15 10:45:48.858117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.630 [2024-11-15 10:45:48.858226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.630 [2024-11-15 10:45:48.858254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.630 [2024-11-15 10:45:48.863945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.630 [2024-11-15 10:45:48.864042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.630 [2024-11-15 10:45:48.864067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.630 [2024-11-15 10:45:48.870029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 
00:27:00.630 [2024-11-15 10:45:48.870185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.630 [2024-11-15 10:45:48.870211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.630 [2024-11-15 10:45:48.876465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.630 [2024-11-15 10:45:48.876703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.876745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.882876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.883001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.883027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.889253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.889410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.889439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.895553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.895700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.895742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.901941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.902084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.902112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.908549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.908708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.908748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.914989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.915175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.915201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.921232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.921431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.921459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.927406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.927525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.927552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.933698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.933842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.933868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.939917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.940046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.940072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.945748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.945873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.945900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.952007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.952159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.952186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.958393] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.958523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.958552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.964868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.965036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.965069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.971300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.971448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.971477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.977668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.977778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.977804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.983802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.983944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.983970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.990436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.990588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.990615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:48.996862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.631 [2024-11-15 10:45:48.997000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.631 [2024-11-15 10:45:48.997027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.631 [2024-11-15 10:45:49.003152] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.003298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.003323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.632 [2024-11-15 10:45:49.009773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.009937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.009966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.632 [2024-11-15 10:45:49.016115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.016227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.016253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.632 [2024-11-15 10:45:49.021862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.021956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.021981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.632 [2024-11-15 10:45:49.027741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.027896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.027922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.632 [2024-11-15 10:45:49.033499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.033607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.033634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.632 [2024-11-15 10:45:49.039315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.039428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.039454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.632 
[2024-11-15 10:45:49.045783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.046002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.046028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.632 [2024-11-15 10:45:49.052615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.052861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.052887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.632 [2024-11-15 10:45:49.058933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.059092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.059119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.632 [2024-11-15 10:45:49.065318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.065470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.065497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.632 [2024-11-15 10:45:49.071574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.071780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.071806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.632 [2024-11-15 10:45:49.077848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.078068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.078095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.632 [2024-11-15 10:45:49.084409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.084567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.084595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:27:00.632 [2024-11-15 10:45:49.090807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.632 [2024-11-15 10:45:49.090927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.632 [2024-11-15 10:45:49.090952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.891 [2024-11-15 10:45:49.097615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.891 [2024-11-15 10:45:49.097816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.891 [2024-11-15 10:45:49.097845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.891 [2024-11-15 10:45:49.104262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.891 [2024-11-15 10:45:49.104424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.891 [2024-11-15 10:45:49.104453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.891 [2024-11-15 10:45:49.110414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.891 [2024-11-15 10:45:49.110545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.891 [2024-11-15 10:45:49.110573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.891 [2024-11-15 10:45:49.116794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.891 [2024-11-15 10:45:49.116959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.891 [2024-11-15 10:45:49.116985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.891 [2024-11-15 10:45:49.122804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.891 [2024-11-15 10:45:49.122948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.891 [2024-11-15 10:45:49.122974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.891 [2024-11-15 10:45:49.128379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.891 [2024-11-15 10:45:49.128512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.891 [2024-11-15 10:45:49.128544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.891 [2024-11-15 10:45:49.134204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.891 [2024-11-15 10:45:49.134311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.891 [2024-11-15 10:45:49.134337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.891 [2024-11-15 10:45:49.139869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.139973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.139999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.145595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.145742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.145767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.151654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.151830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.151855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.158254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.158432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.158474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.164460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.164604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.164631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.170858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.171086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.171113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.177576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.177725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.177752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.183883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.184038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.184065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.190145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.190305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.190331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.196829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.196946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.196972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.203727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.203816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.203841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.209790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.209880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.209904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.215955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.216071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.216097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.222212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.222402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.222429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.228498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.228647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.228689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.234604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.234733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.234759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.240792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.240933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.240959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.247593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.247815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.247843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.254425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.254538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.254566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.260741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.260850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 
10:45:49.260876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.266800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.266949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.266976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.273536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.273765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.273791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.280459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.280611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.280638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.286939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.287057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.892 [2024-11-15 10:45:49.287083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.892 [2024-11-15 10:45:49.293080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.892 [2024-11-15 10:45:49.293199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.893 [2024-11-15 10:45:49.293231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.893 [2024-11-15 10:45:49.300085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.893 [2024-11-15 10:45:49.300359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.893 [2024-11-15 10:45:49.300393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.893 [2024-11-15 10:45:49.307173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.893 [2024-11-15 10:45:49.307297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:00.893 [2024-11-15 10:45:49.307324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.893 [2024-11-15 10:45:49.313783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.893 [2024-11-15 10:45:49.313951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.893 [2024-11-15 10:45:49.313977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.893 [2024-11-15 10:45:49.320162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.893 [2024-11-15 10:45:49.320272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.893 [2024-11-15 10:45:49.320298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.893 4780.00 IOPS, 597.50 MiB/s [2024-11-15T09:45:49.356Z] [2024-11-15 10:45:49.327524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.893 [2024-11-15 10:45:49.327622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.893 [2024-11-15 10:45:49.327663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.893 [2024-11-15 10:45:49.333970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.893 [2024-11-15 10:45:49.334102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.893 [2024-11-15 10:45:49.334127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.893 [2024-11-15 10:45:49.341262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.893 [2024-11-15 10:45:49.341409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.893 [2024-11-15 10:45:49.341434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.893 [2024-11-15 10:45:49.349306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.893 [2024-11-15 10:45:49.349437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.893 [2024-11-15 10:45:49.349467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.893 [2024-11-15 10:45:49.357164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:00.893 [2024-11-15 10:45:49.357371] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.893 [2024-11-15 10:45:49.357421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.152 [2024-11-15 10:45:49.365609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.152 [2024-11-15 10:45:49.365777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.152 [2024-11-15 10:45:49.365804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.152 [2024-11-15 10:45:49.373537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.152 [2024-11-15 10:45:49.373670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.152 [2024-11-15 10:45:49.373697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.152 [2024-11-15 10:45:49.380071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.152 [2024-11-15 10:45:49.380184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.152 [2024-11-15 10:45:49.380210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.152 [2024-11-15 10:45:49.386679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.152 [2024-11-15 10:45:49.386798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.152 [2024-11-15 10:45:49.386825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.152 [2024-11-15 10:45:49.393028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.152 [2024-11-15 10:45:49.393126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.152 [2024-11-15 10:45:49.393152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.152 [2024-11-15 10:45:49.400466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.152 [2024-11-15 10:45:49.400707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.152 [2024-11-15 10:45:49.400733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.152 [2024-11-15 10:45:49.407900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.152 [2024-11-15 10:45:49.408027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.152 [2024-11-15 10:45:49.408053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.152 [2024-11-15 10:45:49.414683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.152 [2024-11-15 10:45:49.414803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.152 [2024-11-15 10:45:49.414829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.152 [2024-11-15 10:45:49.421487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.152 [2024-11-15 10:45:49.421612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.152 [2024-11-15 10:45:49.421640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.152 [2024-11-15 10:45:49.428420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.428552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.428579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.435800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.435912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.435938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.443426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.443520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.443546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.452779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.452875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.452905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.460245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 
10:45:49.460411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.460439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.466920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.467009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.467035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.473656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.473770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.473796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.481593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.481825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.481857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.489417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.489602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.489629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.497453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.497706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.497733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.506814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.507029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.507055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.516185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 
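[editor's note] For context on the repeating pattern above: each group is (1) tcp.c:2233 data_crc32_calc_done reporting a data digest mismatch on the qpair, (2) the WRITE command that hit it, and (3) its completion with status (00/22), i.e. status code type 0x0 / status code 0x22, "Transient Transport Error". The sketch below is a minimal, self-contained illustration of that check, not SPDK source: it recomputes CRC32C (the Castagnoli CRC used for NVMe/TCP header/data digests) over a DATA PDU payload and compares it with the digest received on the wire. Function names (crc32c, verify_data_digest), the sample payload, and the simplified digest framing are illustrative assumptions; the real host path computes the digest during PDU processing and may offload it.

/*
 * Illustrative sketch only (not SPDK code): verify an NVMe/TCP-style
 * data digest by recomputing CRC32C over the payload and comparing it
 * with the value carried in the PDU. A mismatch is the condition the
 * log above reports as "Data digest error".
 */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Bitwise CRC32C: reflected polynomial 0x82F63B78, init ~0, final xor ~0. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns 0 when the payload matches the digest received with the PDU. */
static int verify_data_digest(const uint8_t *payload, size_t len,
                              uint32_t wire_ddgst)
{
    uint32_t calc = crc32c(payload, len);
    if (calc != wire_ddgst) {
        fprintf(stderr, "Data digest error: calculated 0x%08x, wire 0x%08x\n",
                calc, wire_ddgst);
        return -1;
    }
    return 0;
}

int main(void)
{
    uint8_t payload[4096];
    memset(payload, 0xA5, sizeof(payload));   /* stand-in for one data PDU payload */

    uint32_t good = crc32c(payload, sizeof(payload));
    printf("intact payload:    %s\n",
           verify_data_digest(payload, sizeof(payload), good) == 0 ? "ok" : "digest error");

    payload[7] ^= 0x01;                       /* flip one bit to emulate corruption in flight */
    printf("corrupted payload: %s\n",
           verify_data_digest(payload, sizeof(payload), good) == 0 ? "ok" : "digest error");
    return 0;
}

As a cross-check on the interleaved progress line earlier in this run (4780.00 IOPS, 597.50 MiB/s): if the len:32 entries are 32 logical blocks of 4 KiB (an assumption, not stated in the log), each I/O is 128 KiB, and 4780 x 128 KiB = 597.5 MiB/s, which matches the reported throughput.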
00:27:01.153 [2024-11-15 10:45:49.516309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.516335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.524969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.525116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.525158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.533938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.534048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.534074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.541974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.542062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.542086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.549043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.549171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.549197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.555565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.555664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.555689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.561573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.561661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.561700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.568005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.568076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.568102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.574631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.574722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.574748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.581393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.581507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.581535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.587917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.588010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.588036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.594771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.594856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.594882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.601617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.601763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.601791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.608336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.608481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.608510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.153 [2024-11-15 10:45:49.614839] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.153 [2024-11-15 10:45:49.614976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.153 [2024-11-15 10:45:49.615005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.621644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.621739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.621782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.628546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.628630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.628660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.635045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.635170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.635197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.641564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.641721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.641749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.648346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.648465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.648491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.655747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.655850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.655880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.662823] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.662938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.662965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.670020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.670118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.670155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.677490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.677570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.677598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.687139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.687436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.687465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.695298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.695441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.695470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.702103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.702226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.702253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.709597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.709770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.709798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.413 
[2024-11-15 10:45:49.717814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.718002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.718030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.725998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.726236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.726263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.735977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.736073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.736100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.742990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.743091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.743118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.413 [2024-11-15 10:45:49.750049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.413 [2024-11-15 10:45:49.750154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.413 [2024-11-15 10:45:49.750182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.756611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.756720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.756746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.763271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.763404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.763432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.770295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.770416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.770443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.777491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.777574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.777603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.784810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.784913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.784939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.792021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.792115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.792141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.799283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.799397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.799424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.806491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.806640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.806682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.814070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.814208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.814236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.821289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.821555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.821584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.828133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.828428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.828457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.835252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.835549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.835578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.842145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.842469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.842500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.848283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.848575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.848604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.854156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.854469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.854499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.859896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.860206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.860241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.865615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.865882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.865910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.871466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.871779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.871806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.414 [2024-11-15 10:45:49.877214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.414 [2024-11-15 10:45:49.877617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.414 [2024-11-15 10:45:49.877648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.883227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.883631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.883662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.889881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.890229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.890257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.896020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.896297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.896324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.902125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.902478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.902506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.908303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.908681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.908710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.914470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.914766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.914794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.920546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.920868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.920896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.926606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.926913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.926941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.932629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.932890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.932918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.938305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.938647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.938675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.944150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.944453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 
10:45:49.944482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.949910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.950173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.950200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.955692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.955957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.955984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.961476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.961776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.961804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.967329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.967623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.967665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.973289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.973581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.973612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.979505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.979828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.979855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.985709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.985975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:01.673 [2024-11-15 10:45:49.986008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.991498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.991830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.991857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:49.997056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:49.997327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:49.997376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.673 [2024-11-15 10:45:50.003501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.673 [2024-11-15 10:45:50.003754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.673 [2024-11-15 10:45:50.003784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.010388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.010660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.010695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.016166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.016470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.016513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.021987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.022263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.022294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.027756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.028022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.028053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.033966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.034227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.034263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.039818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.040102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.040132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.045650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.045929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.045959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.051395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.051663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.051707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.057548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.057850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.057879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.063278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.063595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.063625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.069124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.069433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.069463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.074952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.075233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.075260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.080502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.080816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.080861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.086513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.086793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.086820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.092844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.093101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.093128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.099289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.099592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.099631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.105416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.105715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.105742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.111573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.111867] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.111895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.118117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.118405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.118443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.124379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.124660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.124688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.130166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.130462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.130490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.674 [2024-11-15 10:45:50.135904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.674 [2024-11-15 10:45:50.136232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.674 [2024-11-15 10:45:50.136262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.932 [2024-11-15 10:45:50.141796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.142052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.142081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.147588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.147885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.147914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.153224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.153513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.153541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.159193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.159477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.159506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.165533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.165822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.165850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.171938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.172186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.172221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.178419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.178683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.178724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.184704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.184989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.185018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.191449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.191738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.191766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.197956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 
10:45:50.198238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.198266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.204479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.204755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.204783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.211094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.211370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.211398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.217396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.217681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.217723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.223766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.224089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.224118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.230226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.230529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.230558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.236595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.236878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.236905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.243145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 
00:27:01.933 [2024-11-15 10:45:50.243462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.243491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.250023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.250280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.250307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.256325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.256619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.256663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.262695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.262990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.263017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.269238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.269527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.269555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.275268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.275582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.275627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.281828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.282101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.282128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.288359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.288676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.288705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.294850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.295252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.295294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.301522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.301807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.301834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.308625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.933 [2024-11-15 10:45:50.308877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.933 [2024-11-15 10:45:50.308904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:01.933 [2024-11-15 10:45:50.316001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.934 [2024-11-15 10:45:50.316290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.934 [2024-11-15 10:45:50.316319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:01.934 [2024-11-15 10:45:50.323393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e80550) with pdu=0x2000166ff3c8 00:27:01.934 [2024-11-15 10:45:50.323792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.934 [2024-11-15 10:45:50.323835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:01.934 4696.50 IOPS, 587.06 MiB/s 00:27:01.934 Latency(us) 00:27:01.934 [2024-11-15T09:45:50.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.934 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:01.934 nvme0n1 : 2.01 4693.52 586.69 0.00 0.00 3401.34 2633.58 13010.11 00:27:01.934 [2024-11-15T09:45:50.397Z] =================================================================================================================== 00:27:01.934 [2024-11-15T09:45:50.397Z] Total : 4693.52 586.69 0.00 0.00 3401.34 2633.58 
13010.11 00:27:01.934 { 00:27:01.934 "results": [ 00:27:01.934 { 00:27:01.934 "job": "nvme0n1", 00:27:01.934 "core_mask": "0x2", 00:27:01.934 "workload": "randwrite", 00:27:01.934 "status": "finished", 00:27:01.934 "queue_depth": 16, 00:27:01.934 "io_size": 131072, 00:27:01.934 "runtime": 2.005319, 00:27:01.934 "iops": 4693.517589969476, 00:27:01.934 "mibps": 586.6896987461845, 00:27:01.934 "io_failed": 0, 00:27:01.934 "io_timeout": 0, 00:27:01.934 "avg_latency_us": 3401.3388764540146, 00:27:01.934 "min_latency_us": 2633.5762962962963, 00:27:01.934 "max_latency_us": 13010.10962962963 00:27:01.934 } 00:27:01.934 ], 00:27:01.934 "core_count": 1 00:27:01.934 } 00:27:01.934 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:01.934 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:01.934 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:01.934 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:01.934 | .driver_specific 00:27:01.934 | .nvme_error 00:27:01.934 | .status_code 00:27:01.934 | .command_transient_transport_error' 00:27:02.192 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 304 > 0 )) 00:27:02.192 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 484563 00:27:02.192 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 484563 ']' 00:27:02.192 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 484563 00:27:02.192 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:02.192 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:02.192 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 484563 00:27:02.449 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:02.449 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:02.449 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 484563' 00:27:02.449 killing process with pid 484563 00:27:02.449 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 484563 00:27:02.449 Received shutdown signal, test time was about 2.000000 seconds 00:27:02.449 00:27:02.449 Latency(us) 00:27:02.449 [2024-11-15T09:45:50.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.449 [2024-11-15T09:45:50.912Z] =================================================================================================================== 00:27:02.449 [2024-11-15T09:45:50.912Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:02.449 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 484563 00:27:02.449 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 483195 00:27:02.449 
10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 483195 ']' 00:27:02.449 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 483195 00:27:02.449 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:02.449 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:02.449 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 483195 00:27:02.707 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:02.707 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:02.707 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 483195' 00:27:02.707 killing process with pid 483195 00:27:02.707 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 483195 00:27:02.707 10:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 483195 00:27:02.707 00:27:02.707 real 0m15.413s 00:27:02.707 user 0m30.282s 00:27:02.707 sys 0m5.041s 00:27:02.707 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:02.707 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.707 ************************************ 00:27:02.707 END TEST nvmf_digest_error 00:27:02.707 ************************************ 00:27:02.707 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:02.707 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:02.707 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:02.707 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:02.707 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:02.707 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:02.707 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:02.707 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:02.967 rmmod nvme_tcp 00:27:02.967 rmmod nvme_fabrics 00:27:02.967 rmmod nvme_keyring 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 483195 ']' 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 483195 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 483195 ']' 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 483195 00:27:02.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (483195) - No such process 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@979 -- # echo 'Process with pid 483195 is not found' 00:27:02.967 Process with pid 483195 is not found 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.967 10:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:04.923 00:27:04.923 real 0m35.906s 00:27:04.923 user 1m2.321s 00:27:04.923 sys 0m11.914s 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:04.923 ************************************ 00:27:04.923 END TEST nvmf_digest 00:27:04.923 ************************************ 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.923 ************************************ 00:27:04.923 START TEST nvmf_bdevperf 00:27:04.923 ************************************ 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:04.923 * Looking for test storage... 
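Before the bdevperf test output continues below, note how the digest_error check that just finished decides pass/fail: the "(( 304 > 0 ))" trace above reads the nvme0n1 iostat over the bperf RPC socket and pulls the transient-transport-error counter out of the NVMe driver-specific status codes, which is where the injected data-digest (CRC32) failures end up. A minimal sketch of that check, assuming the rpc.py path and the /var/tmp/bperf.sock socket shown in the trace:

    # count digest failures reported back as COMMAND TRANSIENT TRANSPORT ERROR (00/22)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # the run above reported 304 such completions, so the test passes
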
00:27:04.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:27:04.923 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:05.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.182 --rc genhtml_branch_coverage=1 00:27:05.182 --rc genhtml_function_coverage=1 00:27:05.182 --rc genhtml_legend=1 00:27:05.182 --rc geninfo_all_blocks=1 00:27:05.182 --rc geninfo_unexecuted_blocks=1 00:27:05.182 00:27:05.182 ' 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:05.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.182 --rc genhtml_branch_coverage=1 00:27:05.182 --rc genhtml_function_coverage=1 00:27:05.182 --rc genhtml_legend=1 00:27:05.182 --rc geninfo_all_blocks=1 00:27:05.182 --rc geninfo_unexecuted_blocks=1 00:27:05.182 00:27:05.182 ' 00:27:05.182 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:05.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.182 --rc genhtml_branch_coverage=1 00:27:05.182 --rc genhtml_function_coverage=1 00:27:05.183 --rc genhtml_legend=1 00:27:05.183 --rc geninfo_all_blocks=1 00:27:05.183 --rc geninfo_unexecuted_blocks=1 00:27:05.183 00:27:05.183 ' 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:05.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.183 --rc genhtml_branch_coverage=1 00:27:05.183 --rc genhtml_function_coverage=1 00:27:05.183 --rc genhtml_legend=1 00:27:05.183 --rc geninfo_all_blocks=1 00:27:05.183 --rc geninfo_unexecuted_blocks=1 00:27:05.183 00:27:05.183 ' 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:05.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:05.183 10:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:27:07.714 Found 0000:82:00.0 (0x8086 - 0x159b) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:27:07.714 Found 0000:82:00.1 (0x8086 - 0x159b) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:07.714 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:27:07.715 Found net devices under 0000:82:00.0: cvl_0_0 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:27:07.715 Found net devices under 0000:82:00.1: cvl_0_1 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:07.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:27:07.715 00:27:07.715 --- 10.0.0.2 ping statistics --- 00:27:07.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.715 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:07.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:27:07.715 00:27:07.715 --- 10.0.0.1 ping statistics --- 00:27:07.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.715 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=487046 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 487046 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 487046 ']' 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:07.715 10:45:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.715 [2024-11-15 10:45:55.912029] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
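Editor's sketch: summarizing the nvmf_tcp_init steps traced above, one E810 port (cvl_0_0, 10.0.0.2/24) is moved into the cvl_0_0_ns_spdk namespace for the target, the other (cvl_0_1, 10.0.0.1/24) stays in the root namespace as the initiator, TCP port 4420 is opened in iptables, and both directions are ping-tested before the target starts. The same topology can be reproduced by hand with the interface names taken from this log:

# Sketch of the two-port target/initiator topology built by nvmf_tcp_init.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator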
00:27:07.715 [2024-11-15 10:45:55.912120] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.715 [2024-11-15 10:45:55.984084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:07.715 [2024-11-15 10:45:56.041556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.715 [2024-11-15 10:45:56.041608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.715 [2024-11-15 10:45:56.041637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.715 [2024-11-15 10:45:56.041649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.715 [2024-11-15 10:45:56.041659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.715 [2024-11-15 10:45:56.043133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.715 [2024-11-15 10:45:56.043196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.715 [2024-11-15 10:45:56.043200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.715 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:07.715 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:27:07.715 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:07.715 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:07.715 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.715 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.715 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.715 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.715 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.715 [2024-11-15 10:45:56.177130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.973 Malloc0 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
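Editor's sketch: the rpc_cmd calls in this test drive the target over /var/tmp/spdk.sock; outside the harness the same startup and transport creation can be done with scripts/rpc.py, which defaults to that socket. A rough manual equivalent of the steps traced above, assuming the SPDK build tree layout shown in this log:

# Sketch: start the target in the namespace and create the TCP transport by hand.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# wait for /var/tmp/spdk.sock to appear before issuing RPCs
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192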
00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.973 [2024-11-15 10:45:56.233202] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:07.973 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:07.974 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:07.974 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:07.974 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:07.974 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:07.974 { 00:27:07.974 "params": { 00:27:07.974 "name": "Nvme$subsystem", 00:27:07.974 "trtype": "$TEST_TRANSPORT", 00:27:07.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.974 "adrfam": "ipv4", 00:27:07.974 "trsvcid": "$NVMF_PORT", 00:27:07.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.974 "hdgst": ${hdgst:-false}, 00:27:07.974 "ddgst": ${ddgst:-false} 00:27:07.974 }, 00:27:07.974 "method": "bdev_nvme_attach_controller" 00:27:07.974 } 00:27:07.974 EOF 00:27:07.974 )") 00:27:07.974 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:07.974 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:07.974 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:07.974 10:45:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:07.974 "params": { 00:27:07.974 "name": "Nvme1", 00:27:07.974 "trtype": "tcp", 00:27:07.974 "traddr": "10.0.0.2", 00:27:07.974 "adrfam": "ipv4", 00:27:07.974 "trsvcid": "4420", 00:27:07.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:07.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:07.974 "hdgst": false, 00:27:07.974 "ddgst": false 00:27:07.974 }, 00:27:07.974 "method": "bdev_nvme_attach_controller" 00:27:07.974 }' 00:27:07.974 [2024-11-15 10:45:56.281408] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
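Editor's sketch: gen_nvmf_target_json above expands the heredoc into a bdev_nvme_attach_controller config that bdevperf reads from /dev/fd/62. A rough equivalent with a plain config file is sketched below; the outer subsystems/bdev wrapper is the generic SPDK JSON-config layout and may not match the harness-generated file exactly, while the params block is copied from the log:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1

With one attached controller this is the 4 KiB verify workload whose one-second result appears further down (8895.43 IOPS x 4096 B is roughly 34.75 MiB/s, matching the reported throughput).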
00:27:07.974 [2024-11-15 10:45:56.281483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487074 ] 00:27:07.974 [2024-11-15 10:45:56.348923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.974 [2024-11-15 10:45:56.407632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.231 Running I/O for 1 seconds... 00:27:09.604 8849.00 IOPS, 34.57 MiB/s 00:27:09.604 Latency(us) 00:27:09.604 [2024-11-15T09:45:58.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.604 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:09.604 Verification LBA range: start 0x0 length 0x4000 00:27:09.604 Nvme1n1 : 1.01 8895.43 34.75 0.00 0.00 14318.46 2973.39 14272.28 00:27:09.604 [2024-11-15T09:45:58.067Z] =================================================================================================================== 00:27:09.604 [2024-11-15T09:45:58.067Z] Total : 8895.43 34.75 0.00 0.00 14318.46 2973.39 14272.28 00:27:09.604 10:45:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=487213 00:27:09.604 10:45:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:09.604 10:45:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:09.604 10:45:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:09.604 10:45:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:09.604 10:45:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:09.604 10:45:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.605 10:45:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.605 { 00:27:09.605 "params": { 00:27:09.605 "name": "Nvme$subsystem", 00:27:09.605 "trtype": "$TEST_TRANSPORT", 00:27:09.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.605 "adrfam": "ipv4", 00:27:09.605 "trsvcid": "$NVMF_PORT", 00:27:09.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.605 "hdgst": ${hdgst:-false}, 00:27:09.605 "ddgst": ${ddgst:-false} 00:27:09.605 }, 00:27:09.605 "method": "bdev_nvme_attach_controller" 00:27:09.605 } 00:27:09.605 EOF 00:27:09.605 )") 00:27:09.605 10:45:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:09.605 10:45:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:27:09.605 10:45:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:09.605 10:45:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:09.605 "params": { 00:27:09.605 "name": "Nvme1", 00:27:09.605 "trtype": "tcp", 00:27:09.605 "traddr": "10.0.0.2", 00:27:09.605 "adrfam": "ipv4", 00:27:09.605 "trsvcid": "4420", 00:27:09.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:09.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:09.605 "hdgst": false, 00:27:09.605 "ddgst": false 00:27:09.605 }, 00:27:09.605 "method": "bdev_nvme_attach_controller" 00:27:09.605 }' 00:27:09.605 [2024-11-15 10:45:57.896470] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:27:09.605 [2024-11-15 10:45:57.896558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487213 ] 00:27:09.605 [2024-11-15 10:45:57.965389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.605 [2024-11-15 10:45:58.022933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.862 Running I/O for 15 seconds... 00:27:12.165 8785.00 IOPS, 34.32 MiB/s [2024-11-15T09:46:00.888Z] 8917.00 IOPS, 34.83 MiB/s [2024-11-15T09:46:00.888Z] 10:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 487046 00:27:12.425 10:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:12.425 [2024-11-15 10:46:00.862880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.425 [2024-11-15 10:46:00.862947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.425 [2024-11-15 10:46:00.862978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.425 [2024-11-15 10:46:00.862994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.425 [2024-11-15 10:46:00.863012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.425 [2024-11-15 10:46:00.863027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.425 [2024-11-15 10:46:00.863042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.425 [2024-11-15 10:46:00.863055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.425 [2024-11-15 10:46:00.863070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.425 [2024-11-15 10:46:00.863084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 
10:46:00.863112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.863977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.863998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.864012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.864026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.864039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.864053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.864066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.864081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.864093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.864107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.864134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.864148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.864161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.864174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.864186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.864200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.864212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.864226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.864238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.864252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.864264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.864278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.426 [2024-11-15 10:46:00.864291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.426 [2024-11-15 10:46:00.864304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864390] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864725] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.864981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.864994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58528 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:12.427 [2024-11-15 10:46:00.865267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.427 [2024-11-15 10:46:00.865471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.427 [2024-11-15 10:46:00.865487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865587] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865898] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.865981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.865995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.866007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.866033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.866059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.866086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.866112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.866139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.866165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.866195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.866223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.866250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.428 [2024-11-15 10:46:00.866276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.428 [2024-11-15 10:46:00.866303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.428 [2024-11-15 10:46:00.866329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.428 [2024-11-15 10:46:00.866380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.428 [2024-11-15 10:46:00.866412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.428 [2024-11-15 10:46:00.866442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.428 [2024-11-15 10:46:00.866471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 
[2024-11-15 10:46:00.866491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.428 [2024-11-15 10:46:00.866504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.428 [2024-11-15 10:46:00.866534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.428 [2024-11-15 10:46:00.866563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.428 [2024-11-15 10:46:00.866596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.428 [2024-11-15 10:46:00.866625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.428 [2024-11-15 10:46:00.866670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.428 [2024-11-15 10:46:00.866685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.429 [2024-11-15 10:46:00.866698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.429 [2024-11-15 10:46:00.866728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.429 [2024-11-15 10:46:00.866741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.429 [2024-11-15 10:46:00.866755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.429 [2024-11-15 10:46:00.866768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.429 [2024-11-15 10:46:00.866781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.429 [2024-11-15 10:46:00.866794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.429 [2024-11-15 10:46:00.866807] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e5e40 is same with the state(6) to be set 00:27:12.429 [2024-11-15 10:46:00.866824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:12.429 [2024-11-15 10:46:00.866834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:12.429 [2024-11-15 10:46:00.866860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57992 len:8 PRP1 0x0 PRP2 0x0 00:27:12.429 [2024-11-15 10:46:00.866873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.429 [2024-11-15 10:46:00.870027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.429 [2024-11-15 10:46:00.870096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.429 [2024-11-15 10:46:00.870789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.429 [2024-11-15 10:46:00.870816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.429 [2024-11-15 10:46:00.870831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.429 [2024-11-15 10:46:00.871022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.429 [2024-11-15 10:46:00.871218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.429 [2024-11-15 10:46:00.871238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.429 [2024-11-15 10:46:00.871259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.429 [2024-11-15 10:46:00.871276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.429 [2024-11-15 10:46:00.883617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.429 [2024-11-15 10:46:00.884089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.429 [2024-11-15 10:46:00.884116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.429 [2024-11-15 10:46:00.884144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.429 [2024-11-15 10:46:00.884334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.429 [2024-11-15 10:46:00.884579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.429 [2024-11-15 10:46:00.884602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.429 [2024-11-15 10:46:00.884617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:27:12.429 [2024-11-15 10:46:00.884631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.688 [2024-11-15 10:46:00.896855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.688 [2024-11-15 10:46:00.897291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.688 [2024-11-15 10:46:00.897331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.688 [2024-11-15 10:46:00.897347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.688 [2024-11-15 10:46:00.897608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.688 [2024-11-15 10:46:00.897879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.688 [2024-11-15 10:46:00.897900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.688 [2024-11-15 10:46:00.897913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.688 [2024-11-15 10:46:00.897925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.688 [2024-11-15 10:46:00.909873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.688 [2024-11-15 10:46:00.910306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.688 [2024-11-15 10:46:00.910345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.688 [2024-11-15 10:46:00.910360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.688 [2024-11-15 10:46:00.910585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.688 [2024-11-15 10:46:00.910816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.688 [2024-11-15 10:46:00.910835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.688 [2024-11-15 10:46:00.910847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.688 [2024-11-15 10:46:00.910858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.688 [2024-11-15 10:46:00.922982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.688 [2024-11-15 10:46:00.923418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.688 [2024-11-15 10:46:00.923457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.688 [2024-11-15 10:46:00.923472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.688 [2024-11-15 10:46:00.923660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.688 [2024-11-15 10:46:00.923853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.688 [2024-11-15 10:46:00.923871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.688 [2024-11-15 10:46:00.923884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.688 [2024-11-15 10:46:00.923894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.688 [2024-11-15 10:46:00.936071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.688 [2024-11-15 10:46:00.936494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.688 [2024-11-15 10:46:00.936533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.688 [2024-11-15 10:46:00.936547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.688 [2024-11-15 10:46:00.936735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.688 [2024-11-15 10:46:00.936927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.688 [2024-11-15 10:46:00.936945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.688 [2024-11-15 10:46:00.936958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.688 [2024-11-15 10:46:00.936969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.688 [2024-11-15 10:46:00.949030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.688 [2024-11-15 10:46:00.949472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.688 [2024-11-15 10:46:00.949520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.688 [2024-11-15 10:46:00.949534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.688 [2024-11-15 10:46:00.949736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.688 [2024-11-15 10:46:00.949928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.688 [2024-11-15 10:46:00.949946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.688 [2024-11-15 10:46:00.949959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.688 [2024-11-15 10:46:00.949970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.688 [2024-11-15 10:46:00.962091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.688 [2024-11-15 10:46:00.962525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.688 [2024-11-15 10:46:00.962565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.688 [2024-11-15 10:46:00.962584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.688 [2024-11-15 10:46:00.962774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.688 [2024-11-15 10:46:00.962966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.688 [2024-11-15 10:46:00.962984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.688 [2024-11-15 10:46:00.962996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.688 [2024-11-15 10:46:00.963007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.688 [2024-11-15 10:46:00.975200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.688 [2024-11-15 10:46:00.975659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.688 [2024-11-15 10:46:00.975684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.688 [2024-11-15 10:46:00.975712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.688 [2024-11-15 10:46:00.975900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.688 [2024-11-15 10:46:00.976092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.688 [2024-11-15 10:46:00.976111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.688 [2024-11-15 10:46:00.976123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.688 [2024-11-15 10:46:00.976134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.688 [2024-11-15 10:46:00.988250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.688 [2024-11-15 10:46:00.988667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.688 [2024-11-15 10:46:00.988694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.688 [2024-11-15 10:46:00.988707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.688 [2024-11-15 10:46:00.988909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.688 [2024-11-15 10:46:00.989101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.688 [2024-11-15 10:46:00.989120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.688 [2024-11-15 10:46:00.989132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.688 [2024-11-15 10:46:00.989143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.688 [2024-11-15 10:46:01.001306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.688 [2024-11-15 10:46:01.001770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.688 [2024-11-15 10:46:01.001795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.688 [2024-11-15 10:46:01.001809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.688 [2024-11-15 10:46:01.002010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.688 [2024-11-15 10:46:01.002207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.688 [2024-11-15 10:46:01.002226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.688 [2024-11-15 10:46:01.002246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.689 [2024-11-15 10:46:01.002265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.689 [2024-11-15 10:46:01.014326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.689 [2024-11-15 10:46:01.014718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.689 [2024-11-15 10:46:01.014778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.689 [2024-11-15 10:46:01.014792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.689 [2024-11-15 10:46:01.014994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.689 [2024-11-15 10:46:01.015186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.689 [2024-11-15 10:46:01.015205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.689 [2024-11-15 10:46:01.015217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.689 [2024-11-15 10:46:01.015228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.689 [2024-11-15 10:46:01.027408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.689 [2024-11-15 10:46:01.027812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.689 [2024-11-15 10:46:01.027851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.689 [2024-11-15 10:46:01.027865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.689 [2024-11-15 10:46:01.028067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.689 [2024-11-15 10:46:01.028259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.689 [2024-11-15 10:46:01.028278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.689 [2024-11-15 10:46:01.028290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.689 [2024-11-15 10:46:01.028301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.689 [2024-11-15 10:46:01.040452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.689 [2024-11-15 10:46:01.040889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.689 [2024-11-15 10:46:01.040941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.689 [2024-11-15 10:46:01.040955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.689 [2024-11-15 10:46:01.041158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.689 [2024-11-15 10:46:01.041350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.689 [2024-11-15 10:46:01.041393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.689 [2024-11-15 10:46:01.041413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.689 [2024-11-15 10:46:01.041425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.689 [2024-11-15 10:46:01.053484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.689 [2024-11-15 10:46:01.053918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.689 [2024-11-15 10:46:01.053968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.689 [2024-11-15 10:46:01.053982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.689 [2024-11-15 10:46:01.054183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.689 [2024-11-15 10:46:01.054402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.689 [2024-11-15 10:46:01.054423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.689 [2024-11-15 10:46:01.054450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.689 [2024-11-15 10:46:01.054463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.689 [2024-11-15 10:46:01.066636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.689 [2024-11-15 10:46:01.067026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.689 [2024-11-15 10:46:01.067069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.689 [2024-11-15 10:46:01.067083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.689 [2024-11-15 10:46:01.067303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.689 [2024-11-15 10:46:01.067543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.689 [2024-11-15 10:46:01.067563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.689 [2024-11-15 10:46:01.067577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.689 [2024-11-15 10:46:01.067588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.689 [2024-11-15 10:46:01.079733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.689 [2024-11-15 10:46:01.080123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.689 [2024-11-15 10:46:01.080162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.689 [2024-11-15 10:46:01.080176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.689 [2024-11-15 10:46:01.080405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.689 [2024-11-15 10:46:01.080610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.689 [2024-11-15 10:46:01.080630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.689 [2024-11-15 10:46:01.080643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.689 [2024-11-15 10:46:01.080655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.689 [2024-11-15 10:46:01.092853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.689 [2024-11-15 10:46:01.093274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.689 [2024-11-15 10:46:01.093299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.689 [2024-11-15 10:46:01.093327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.689 [2024-11-15 10:46:01.093550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.689 [2024-11-15 10:46:01.093766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.689 [2024-11-15 10:46:01.093785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.689 [2024-11-15 10:46:01.093798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.689 [2024-11-15 10:46:01.093809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.689 [2024-11-15 10:46:01.105835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.689 [2024-11-15 10:46:01.106256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.689 [2024-11-15 10:46:01.106281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.689 [2024-11-15 10:46:01.106309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.689 [2024-11-15 10:46:01.106545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.689 [2024-11-15 10:46:01.106765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.689 [2024-11-15 10:46:01.106785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.689 [2024-11-15 10:46:01.106812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.689 [2024-11-15 10:46:01.106824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.689 [2024-11-15 10:46:01.118934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.689 [2024-11-15 10:46:01.119288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.689 [2024-11-15 10:46:01.119314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.689 [2024-11-15 10:46:01.119329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.689 [2024-11-15 10:46:01.119554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.689 [2024-11-15 10:46:01.119786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.689 [2024-11-15 10:46:01.119821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.689 [2024-11-15 10:46:01.119834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.689 [2024-11-15 10:46:01.119846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.689 [2024-11-15 10:46:01.132597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.689 [2024-11-15 10:46:01.133043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.689 [2024-11-15 10:46:01.133068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.689 [2024-11-15 10:46:01.133102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.689 [2024-11-15 10:46:01.133297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.689 [2024-11-15 10:46:01.133537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.689 [2024-11-15 10:46:01.133560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.690 [2024-11-15 10:46:01.133574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.690 [2024-11-15 10:46:01.133586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.690 [2024-11-15 10:46:01.145929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.690 [2024-11-15 10:46:01.146360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.690 [2024-11-15 10:46:01.146407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.690 [2024-11-15 10:46:01.146423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.690 [2024-11-15 10:46:01.146652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.690 [2024-11-15 10:46:01.146881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.690 [2024-11-15 10:46:01.146900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.690 [2024-11-15 10:46:01.146912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.690 [2024-11-15 10:46:01.146924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.949 [2024-11-15 10:46:01.159436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.949 [2024-11-15 10:46:01.159874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.949 [2024-11-15 10:46:01.159900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.949 [2024-11-15 10:46:01.159915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.949 [2024-11-15 10:46:01.160121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.949 [2024-11-15 10:46:01.160313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.949 [2024-11-15 10:46:01.160332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.949 [2024-11-15 10:46:01.160360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.949 [2024-11-15 10:46:01.160387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.949 [2024-11-15 10:46:01.172457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.949 [2024-11-15 10:46:01.172886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.949 [2024-11-15 10:46:01.172925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.949 [2024-11-15 10:46:01.172940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.949 [2024-11-15 10:46:01.173128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.949 [2024-11-15 10:46:01.173324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.949 [2024-11-15 10:46:01.173343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.949 [2024-11-15 10:46:01.173355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.949 [2024-11-15 10:46:01.173391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.949 [2024-11-15 10:46:01.185499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.949 [2024-11-15 10:46:01.185956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.949 [2024-11-15 10:46:01.185981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.949 [2024-11-15 10:46:01.186010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.949 [2024-11-15 10:46:01.186198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.949 [2024-11-15 10:46:01.186416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.949 [2024-11-15 10:46:01.186437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.949 [2024-11-15 10:46:01.186450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.949 [2024-11-15 10:46:01.186461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.949 [2024-11-15 10:46:01.198498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.949 [2024-11-15 10:46:01.198907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.949 [2024-11-15 10:46:01.198938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.949 [2024-11-15 10:46:01.198966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.949 [2024-11-15 10:46:01.199155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.949 [2024-11-15 10:46:01.199360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.949 [2024-11-15 10:46:01.199390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.949 [2024-11-15 10:46:01.199403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.949 [2024-11-15 10:46:01.199429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.949 [2024-11-15 10:46:01.211485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.949 [2024-11-15 10:46:01.211915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.949 [2024-11-15 10:46:01.211954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.949 [2024-11-15 10:46:01.211968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.949 [2024-11-15 10:46:01.212157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.949 [2024-11-15 10:46:01.212348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.949 [2024-11-15 10:46:01.212392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.949 [2024-11-15 10:46:01.212412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.949 [2024-11-15 10:46:01.212424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.949 7872.33 IOPS, 30.75 MiB/s [2024-11-15T09:46:01.412Z] [2024-11-15 10:46:01.224728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.949 [2024-11-15 10:46:01.225139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.949 [2024-11-15 10:46:01.225174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.949 [2024-11-15 10:46:01.225203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.949 [2024-11-15 10:46:01.225418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.949 [2024-11-15 10:46:01.225637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.949 [2024-11-15 10:46:01.225657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.949 [2024-11-15 10:46:01.225670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.949 [2024-11-15 10:46:01.225682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.949 [2024-11-15 10:46:01.237698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.949 [2024-11-15 10:46:01.238111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.949 [2024-11-15 10:46:01.238136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.949 [2024-11-15 10:46:01.238163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.949 [2024-11-15 10:46:01.238352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.949 [2024-11-15 10:46:01.238594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.949 [2024-11-15 10:46:01.238614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.949 [2024-11-15 10:46:01.238627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.949 [2024-11-15 10:46:01.238639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.949 [2024-11-15 10:46:01.250665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.949 [2024-11-15 10:46:01.251098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.949 [2024-11-15 10:46:01.251136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.949 [2024-11-15 10:46:01.251150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.949 [2024-11-15 10:46:01.251338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.949 [2024-11-15 10:46:01.251578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.949 [2024-11-15 10:46:01.251599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.949 [2024-11-15 10:46:01.251613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.949 [2024-11-15 10:46:01.251625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.949 [2024-11-15 10:46:01.263760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.949 [2024-11-15 10:46:01.264191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.949 [2024-11-15 10:46:01.264229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.949 [2024-11-15 10:46:01.264244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.949 [2024-11-15 10:46:01.264476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.950 [2024-11-15 10:46:01.264681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.950 [2024-11-15 10:46:01.264701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.950 [2024-11-15 10:46:01.264714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.950 [2024-11-15 10:46:01.264726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.950 [2024-11-15 10:46:01.276792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.950 [2024-11-15 10:46:01.277225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-15 10:46:01.277273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.950 [2024-11-15 10:46:01.277286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.950 [2024-11-15 10:46:01.277537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.950 [2024-11-15 10:46:01.277757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.950 [2024-11-15 10:46:01.277777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.950 [2024-11-15 10:46:01.277789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.950 [2024-11-15 10:46:01.277815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.950 [2024-11-15 10:46:01.289904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.950 [2024-11-15 10:46:01.290360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-15 10:46:01.290416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.950 [2024-11-15 10:46:01.290429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.950 [2024-11-15 10:46:01.290631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.950 [2024-11-15 10:46:01.290823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.950 [2024-11-15 10:46:01.290842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.950 [2024-11-15 10:46:01.290854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.950 [2024-11-15 10:46:01.290865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.950 [2024-11-15 10:46:01.302908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.950 [2024-11-15 10:46:01.303318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-15 10:46:01.303374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.950 [2024-11-15 10:46:01.303396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.950 [2024-11-15 10:46:01.303600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.950 [2024-11-15 10:46:01.303792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.950 [2024-11-15 10:46:01.303811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.950 [2024-11-15 10:46:01.303823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.950 [2024-11-15 10:46:01.303834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.950 [2024-11-15 10:46:01.316014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.950 [2024-11-15 10:46:01.316401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-15 10:46:01.316449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.950 [2024-11-15 10:46:01.316463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.950 [2024-11-15 10:46:01.316664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.950 [2024-11-15 10:46:01.316856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.950 [2024-11-15 10:46:01.316875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.950 [2024-11-15 10:46:01.316888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.950 [2024-11-15 10:46:01.316899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.950 [2024-11-15 10:46:01.329036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.950 [2024-11-15 10:46:01.329467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-15 10:46:01.329507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.950 [2024-11-15 10:46:01.329522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.950 [2024-11-15 10:46:01.329710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.950 [2024-11-15 10:46:01.329902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.950 [2024-11-15 10:46:01.329921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.950 [2024-11-15 10:46:01.329933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.950 [2024-11-15 10:46:01.329944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.950 [2024-11-15 10:46:01.342150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.950 [2024-11-15 10:46:01.342518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-15 10:46:01.342559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.950 [2024-11-15 10:46:01.342574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.950 [2024-11-15 10:46:01.342814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.950 [2024-11-15 10:46:01.343011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.950 [2024-11-15 10:46:01.343030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.950 [2024-11-15 10:46:01.343043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.950 [2024-11-15 10:46:01.343054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.950 [2024-11-15 10:46:01.355317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.950 [2024-11-15 10:46:01.355670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-15 10:46:01.355697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.950 [2024-11-15 10:46:01.355726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.950 [2024-11-15 10:46:01.355914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.950 [2024-11-15 10:46:01.356106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.950 [2024-11-15 10:46:01.356126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.950 [2024-11-15 10:46:01.356138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.950 [2024-11-15 10:46:01.356149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.950 [2024-11-15 10:46:01.368561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.950 [2024-11-15 10:46:01.368910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-15 10:46:01.368934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.950 [2024-11-15 10:46:01.368948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.950 [2024-11-15 10:46:01.369136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.950 [2024-11-15 10:46:01.369328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.950 [2024-11-15 10:46:01.369373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.950 [2024-11-15 10:46:01.369389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.950 [2024-11-15 10:46:01.369416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.950 [2024-11-15 10:46:01.381744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.950 [2024-11-15 10:46:01.382132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-15 10:46:01.382177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.950 [2024-11-15 10:46:01.382191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.950 [2024-11-15 10:46:01.382415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.951 [2024-11-15 10:46:01.382614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.951 [2024-11-15 10:46:01.382633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.951 [2024-11-15 10:46:01.382651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.951 [2024-11-15 10:46:01.382664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:12.951 [2024-11-15 10:46:01.394853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.951 [2024-11-15 10:46:01.395254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-15 10:46:01.395278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.951 [2024-11-15 10:46:01.395292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.951 [2024-11-15 10:46:01.395524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.951 [2024-11-15 10:46:01.395738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.951 [2024-11-15 10:46:01.395757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.951 [2024-11-15 10:46:01.395769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.951 [2024-11-15 10:46:01.395781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:12.951 [2024-11-15 10:46:01.408236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:12.951 [2024-11-15 10:46:01.408587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-15 10:46:01.408616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:12.951 [2024-11-15 10:46:01.408631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:12.951 [2024-11-15 10:46:01.408861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:12.951 [2024-11-15 10:46:01.409059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:12.951 [2024-11-15 10:46:01.409079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:12.951 [2024-11-15 10:46:01.409092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:12.951 [2024-11-15 10:46:01.409103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.209 [2024-11-15 10:46:01.421854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.209 [2024-11-15 10:46:01.422214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-15 10:46:01.422241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.209 [2024-11-15 10:46:01.422270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.209 [2024-11-15 10:46:01.422496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.209 [2024-11-15 10:46:01.422738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.209 [2024-11-15 10:46:01.422758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.209 [2024-11-15 10:46:01.422770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.209 [2024-11-15 10:46:01.422782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.209 [2024-11-15 10:46:01.435482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.209 [2024-11-15 10:46:01.435844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-15 10:46:01.435869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.209 [2024-11-15 10:46:01.435897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.209 [2024-11-15 10:46:01.436099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.209 [2024-11-15 10:46:01.436292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.209 [2024-11-15 10:46:01.436311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.209 [2024-11-15 10:46:01.436324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.209 [2024-11-15 10:46:01.436335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.209 [2024-11-15 10:46:01.448695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.209 [2024-11-15 10:46:01.448995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-15 10:46:01.449020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.209 [2024-11-15 10:46:01.449034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.209 [2024-11-15 10:46:01.449222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.209 [2024-11-15 10:46:01.449441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.209 [2024-11-15 10:46:01.449462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.209 [2024-11-15 10:46:01.449475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.209 [2024-11-15 10:46:01.449487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.209 [2024-11-15 10:46:01.461938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.209 [2024-11-15 10:46:01.462289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-15 10:46:01.462314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.209 [2024-11-15 10:46:01.462342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.209 [2024-11-15 10:46:01.462557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.209 [2024-11-15 10:46:01.462767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.209 [2024-11-15 10:46:01.462786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.209 [2024-11-15 10:46:01.462798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.209 [2024-11-15 10:46:01.462810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.209 [2024-11-15 10:46:01.475088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.209 [2024-11-15 10:46:01.475427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-15 10:46:01.475455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.209 [2024-11-15 10:46:01.475475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.209 [2024-11-15 10:46:01.475689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.209 [2024-11-15 10:46:01.475882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.209 [2024-11-15 10:46:01.475901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.209 [2024-11-15 10:46:01.475913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.209 [2024-11-15 10:46:01.475925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.209 [2024-11-15 10:46:01.488275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.209 [2024-11-15 10:46:01.488607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-15 10:46:01.488634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.209 [2024-11-15 10:46:01.488649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.209 [2024-11-15 10:46:01.488853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.209 [2024-11-15 10:46:01.489045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.209 [2024-11-15 10:46:01.489064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.209 [2024-11-15 10:46:01.489077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.209 [2024-11-15 10:46:01.489088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.209 [2024-11-15 10:46:01.501493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.209 [2024-11-15 10:46:01.501854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-15 10:46:01.501879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.209 [2024-11-15 10:46:01.501893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.209 [2024-11-15 10:46:01.502081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.209 [2024-11-15 10:46:01.502273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.209 [2024-11-15 10:46:01.502292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.209 [2024-11-15 10:46:01.502304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.209 [2024-11-15 10:46:01.502316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.209 [2024-11-15 10:46:01.514783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.209 [2024-11-15 10:46:01.515147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-15 10:46:01.515172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.209 [2024-11-15 10:46:01.515186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.209 [2024-11-15 10:46:01.515399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.209 [2024-11-15 10:46:01.515602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.209 [2024-11-15 10:46:01.515623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.209 [2024-11-15 10:46:01.515636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.210 [2024-11-15 10:46:01.515661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.210 [2024-11-15 10:46:01.528016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.210 [2024-11-15 10:46:01.528395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.210 [2024-11-15 10:46:01.528422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.210 [2024-11-15 10:46:01.528436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.210 [2024-11-15 10:46:01.528630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.210 [2024-11-15 10:46:01.528839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.210 [2024-11-15 10:46:01.528858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.210 [2024-11-15 10:46:01.528870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.210 [2024-11-15 10:46:01.528882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.210 [2024-11-15 10:46:01.541108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.210 [2024-11-15 10:46:01.541434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.210 [2024-11-15 10:46:01.541461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.210 [2024-11-15 10:46:01.541477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.210 [2024-11-15 10:46:01.541691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.210 [2024-11-15 10:46:01.541883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.210 [2024-11-15 10:46:01.541902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.210 [2024-11-15 10:46:01.541915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.210 [2024-11-15 10:46:01.541927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.210 [2024-11-15 10:46:01.554201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.210 [2024-11-15 10:46:01.554609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.210 [2024-11-15 10:46:01.554635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.210 [2024-11-15 10:46:01.554649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.210 [2024-11-15 10:46:01.554869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.210 [2024-11-15 10:46:01.555061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.210 [2024-11-15 10:46:01.555080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.210 [2024-11-15 10:46:01.555097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.210 [2024-11-15 10:46:01.555109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.210 [2024-11-15 10:46:01.567318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.210 [2024-11-15 10:46:01.567727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.210 [2024-11-15 10:46:01.567781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.210 [2024-11-15 10:46:01.567796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.210 [2024-11-15 10:46:01.567990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.210 [2024-11-15 10:46:01.568188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.210 [2024-11-15 10:46:01.568208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.210 [2024-11-15 10:46:01.568220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.210 [2024-11-15 10:46:01.568232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.210 [2024-11-15 10:46:01.580793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.210 [2024-11-15 10:46:01.581216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.210 [2024-11-15 10:46:01.581266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.210 [2024-11-15 10:46:01.581280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.210 [2024-11-15 10:46:01.581521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.210 [2024-11-15 10:46:01.581744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.210 [2024-11-15 10:46:01.581763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.210 [2024-11-15 10:46:01.581776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.210 [2024-11-15 10:46:01.581788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.210 [2024-11-15 10:46:01.594086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.210 [2024-11-15 10:46:01.594490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.210 [2024-11-15 10:46:01.594519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.210 [2024-11-15 10:46:01.594535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.210 [2024-11-15 10:46:01.594751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.210 [2024-11-15 10:46:01.594949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.210 [2024-11-15 10:46:01.594968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.210 [2024-11-15 10:46:01.594981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.210 [2024-11-15 10:46:01.594994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.210 [2024-11-15 10:46:01.607450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.210 [2024-11-15 10:46:01.607889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.210 [2024-11-15 10:46:01.607935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.210 [2024-11-15 10:46:01.607950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.210 [2024-11-15 10:46:01.608157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.210 [2024-11-15 10:46:01.608382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.210 [2024-11-15 10:46:01.608403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.210 [2024-11-15 10:46:01.608417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.210 [2024-11-15 10:46:01.608428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.210 [2024-11-15 10:46:01.620848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.210 [2024-11-15 10:46:01.621225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.210 [2024-11-15 10:46:01.621252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.210 [2024-11-15 10:46:01.621268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.210 [2024-11-15 10:46:01.621513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.210 [2024-11-15 10:46:01.621738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.210 [2024-11-15 10:46:01.621758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.210 [2024-11-15 10:46:01.621771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.210 [2024-11-15 10:46:01.621782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.210 [2024-11-15 10:46:01.634068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.210 [2024-11-15 10:46:01.634487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.210 [2024-11-15 10:46:01.634513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.210 [2024-11-15 10:46:01.634527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.210 [2024-11-15 10:46:01.634774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.210 [2024-11-15 10:46:01.634991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.210 [2024-11-15 10:46:01.635011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.210 [2024-11-15 10:46:01.635024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.210 [2024-11-15 10:46:01.635035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.210 [2024-11-15 10:46:01.647408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.210 [2024-11-15 10:46:01.647837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.210 [2024-11-15 10:46:01.647876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.210 [2024-11-15 10:46:01.647895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.210 [2024-11-15 10:46:01.648090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.210 [2024-11-15 10:46:01.648287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.210 [2024-11-15 10:46:01.648307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.210 [2024-11-15 10:46:01.648319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.210 [2024-11-15 10:46:01.648331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.210 [2024-11-15 10:46:01.660987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.210 [2024-11-15 10:46:01.661418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.210 [2024-11-15 10:46:01.661445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.210 [2024-11-15 10:46:01.661461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.210 [2024-11-15 10:46:01.661696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.210 [2024-11-15 10:46:01.661901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.210 [2024-11-15 10:46:01.661921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.210 [2024-11-15 10:46:01.661934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.210 [2024-11-15 10:46:01.661946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.210 [2024-11-15 10:46:01.674553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.210 [2024-11-15 10:46:01.675002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.210 [2024-11-15 10:46:01.675043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.210 [2024-11-15 10:46:01.675059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.210 [2024-11-15 10:46:01.675259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.210 [2024-11-15 10:46:01.675517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.210 [2024-11-15 10:46:01.675540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.210 [2024-11-15 10:46:01.675554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.210 [2024-11-15 10:46:01.675566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.470 [2024-11-15 10:46:01.687852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.470 [2024-11-15 10:46:01.688281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-15 10:46:01.688321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.470 [2024-11-15 10:46:01.688336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.470 [2024-11-15 10:46:01.688579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.470 [2024-11-15 10:46:01.688807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.470 [2024-11-15 10:46:01.688827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.470 [2024-11-15 10:46:01.688840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.470 [2024-11-15 10:46:01.688852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.470 [2024-11-15 10:46:01.701126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.470 [2024-11-15 10:46:01.701568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-15 10:46:01.701605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.470 [2024-11-15 10:46:01.701636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.470 [2024-11-15 10:46:01.701846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.470 [2024-11-15 10:46:01.702044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.470 [2024-11-15 10:46:01.702063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.470 [2024-11-15 10:46:01.702076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.470 [2024-11-15 10:46:01.702087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.470 [2024-11-15 10:46:01.714461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.470 [2024-11-15 10:46:01.714883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-15 10:46:01.714931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.470 [2024-11-15 10:46:01.714946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.470 [2024-11-15 10:46:01.715140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.470 [2024-11-15 10:46:01.715352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.470 [2024-11-15 10:46:01.715382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.470 [2024-11-15 10:46:01.715395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.470 [2024-11-15 10:46:01.715423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.470 [2024-11-15 10:46:01.727758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.470 [2024-11-15 10:46:01.728129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-15 10:46:01.728155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.470 [2024-11-15 10:46:01.728183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.470 [2024-11-15 10:46:01.728433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.470 [2024-11-15 10:46:01.728645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.470 [2024-11-15 10:46:01.728680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.470 [2024-11-15 10:46:01.728698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.470 [2024-11-15 10:46:01.728711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.470 [2024-11-15 10:46:01.741057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.470 [2024-11-15 10:46:01.741469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-15 10:46:01.741496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.470 [2024-11-15 10:46:01.741525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.470 [2024-11-15 10:46:01.741738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.470 [2024-11-15 10:46:01.741935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.470 [2024-11-15 10:46:01.741955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.470 [2024-11-15 10:46:01.741967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.470 [2024-11-15 10:46:01.741979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.470 [2024-11-15 10:46:01.754588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.470 [2024-11-15 10:46:01.755001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-15 10:46:01.755028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.470 [2024-11-15 10:46:01.755042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.470 [2024-11-15 10:46:01.755263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.470 [2024-11-15 10:46:01.755505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.470 [2024-11-15 10:46:01.755527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.470 [2024-11-15 10:46:01.755542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.470 [2024-11-15 10:46:01.755555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.470 [2024-11-15 10:46:01.768021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.470 [2024-11-15 10:46:01.768495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-15 10:46:01.768536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.470 [2024-11-15 10:46:01.768551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.470 [2024-11-15 10:46:01.768765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.470 [2024-11-15 10:46:01.768962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.470 [2024-11-15 10:46:01.768982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.470 [2024-11-15 10:46:01.768995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.470 [2024-11-15 10:46:01.769006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.470 [2024-11-15 10:46:01.781351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.470 [2024-11-15 10:46:01.781764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-15 10:46:01.781811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.471 [2024-11-15 10:46:01.781827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.471 [2024-11-15 10:46:01.782038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.471 [2024-11-15 10:46:01.782236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.471 [2024-11-15 10:46:01.782256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.471 [2024-11-15 10:46:01.782269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.471 [2024-11-15 10:46:01.782281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.471 [2024-11-15 10:46:01.794829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.471 [2024-11-15 10:46:01.795185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-15 10:46:01.795212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.471 [2024-11-15 10:46:01.795226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.471 [2024-11-15 10:46:01.795450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.471 [2024-11-15 10:46:01.795677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.471 [2024-11-15 10:46:01.795697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.471 [2024-11-15 10:46:01.795725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.471 [2024-11-15 10:46:01.795737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.471 [2024-11-15 10:46:01.808107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.471 [2024-11-15 10:46:01.808441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-15 10:46:01.808470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.471 [2024-11-15 10:46:01.808486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.471 [2024-11-15 10:46:01.808728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.471 [2024-11-15 10:46:01.808926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.471 [2024-11-15 10:46:01.808946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.471 [2024-11-15 10:46:01.808958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.471 [2024-11-15 10:46:01.808970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.471 [2024-11-15 10:46:01.821447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.471 [2024-11-15 10:46:01.821860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-15 10:46:01.821885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.471 [2024-11-15 10:46:01.821918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.471 [2024-11-15 10:46:01.822112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.471 [2024-11-15 10:46:01.822328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.471 [2024-11-15 10:46:01.822370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.471 [2024-11-15 10:46:01.822387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.471 [2024-11-15 10:46:01.822400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.471 [2024-11-15 10:46:01.834623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.471 [2024-11-15 10:46:01.835059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-15 10:46:01.835085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.471 [2024-11-15 10:46:01.835114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.471 [2024-11-15 10:46:01.835308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.471 [2024-11-15 10:46:01.835540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.471 [2024-11-15 10:46:01.835561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.471 [2024-11-15 10:46:01.835575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.471 [2024-11-15 10:46:01.835586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.471 [2024-11-15 10:46:01.847914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.471 [2024-11-15 10:46:01.848324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-15 10:46:01.848348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.471 [2024-11-15 10:46:01.848383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.471 [2024-11-15 10:46:01.848593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.471 [2024-11-15 10:46:01.848829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.471 [2024-11-15 10:46:01.848849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.471 [2024-11-15 10:46:01.848861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.471 [2024-11-15 10:46:01.848873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.471 [2024-11-15 10:46:01.861080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.471 [2024-11-15 10:46:01.861460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-15 10:46:01.861509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.471 [2024-11-15 10:46:01.861524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.471 [2024-11-15 10:46:01.861776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.471 [2024-11-15 10:46:01.861979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.471 [2024-11-15 10:46:01.861998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.471 [2024-11-15 10:46:01.862011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.471 [2024-11-15 10:46:01.862022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.471 [2024-11-15 10:46:01.874239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.471 [2024-11-15 10:46:01.874680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-15 10:46:01.874714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.471 [2024-11-15 10:46:01.874728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.471 [2024-11-15 10:46:01.874922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.471 [2024-11-15 10:46:01.875120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.471 [2024-11-15 10:46:01.875145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.471 [2024-11-15 10:46:01.875167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.471 [2024-11-15 10:46:01.875180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.471 [2024-11-15 10:46:01.887488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.471 [2024-11-15 10:46:01.887920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-15 10:46:01.887946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.471 [2024-11-15 10:46:01.887974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.471 [2024-11-15 10:46:01.888167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.471 [2024-11-15 10:46:01.888388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.472 [2024-11-15 10:46:01.888409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.472 [2024-11-15 10:46:01.888422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.472 [2024-11-15 10:46:01.888435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.472 [2024-11-15 10:46:01.900860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.472 [2024-11-15 10:46:01.901293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-15 10:46:01.901333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.472 [2024-11-15 10:46:01.901349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.472 [2024-11-15 10:46:01.901601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.472 [2024-11-15 10:46:01.901825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.472 [2024-11-15 10:46:01.901845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.472 [2024-11-15 10:46:01.901862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.472 [2024-11-15 10:46:01.901874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.472 [2024-11-15 10:46:01.914058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.472 [2024-11-15 10:46:01.914479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-15 10:46:01.914506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.472 [2024-11-15 10:46:01.914534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.472 [2024-11-15 10:46:01.914747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.472 [2024-11-15 10:46:01.914946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.472 [2024-11-15 10:46:01.914965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.472 [2024-11-15 10:46:01.914978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.472 [2024-11-15 10:46:01.914989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.472 [2024-11-15 10:46:01.927274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.472 [2024-11-15 10:46:01.927712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-15 10:46:01.927753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.472 [2024-11-15 10:46:01.927767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.472 [2024-11-15 10:46:01.927976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.472 [2024-11-15 10:46:01.928174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.472 [2024-11-15 10:46:01.928193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.472 [2024-11-15 10:46:01.928206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.472 [2024-11-15 10:46:01.928218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.732 [2024-11-15 10:46:01.940629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.732 [2024-11-15 10:46:01.941086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-15 10:46:01.941126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.732 [2024-11-15 10:46:01.941140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.732 [2024-11-15 10:46:01.941411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.732 [2024-11-15 10:46:01.941630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.732 [2024-11-15 10:46:01.941651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.732 [2024-11-15 10:46:01.941665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.732 [2024-11-15 10:46:01.941678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.732 [2024-11-15 10:46:01.953813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.732 [2024-11-15 10:46:01.954218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-15 10:46:01.954243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.732 [2024-11-15 10:46:01.954257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.732 [2024-11-15 10:46:01.954512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.732 [2024-11-15 10:46:01.954751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.732 [2024-11-15 10:46:01.954770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.732 [2024-11-15 10:46:01.954783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.732 [2024-11-15 10:46:01.954795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.732 [2024-11-15 10:46:01.966993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.732 [2024-11-15 10:46:01.967403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-15 10:46:01.967444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.732 [2024-11-15 10:46:01.967458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.732 [2024-11-15 10:46:01.967687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.732 [2024-11-15 10:46:01.967885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.732 [2024-11-15 10:46:01.967904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.732 [2024-11-15 10:46:01.967917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.732 [2024-11-15 10:46:01.967929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.732 [2024-11-15 10:46:01.980277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.732 [2024-11-15 10:46:01.980742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-15 10:46:01.980768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.732 [2024-11-15 10:46:01.980782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.732 [2024-11-15 10:46:01.980990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.732 [2024-11-15 10:46:01.981188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.732 [2024-11-15 10:46:01.981207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.732 [2024-11-15 10:46:01.981219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.732 [2024-11-15 10:46:01.981231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.732 [2024-11-15 10:46:01.993475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.732 [2024-11-15 10:46:01.993903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-15 10:46:01.993928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.732 [2024-11-15 10:46:01.993962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.732 [2024-11-15 10:46:01.994157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.732 [2024-11-15 10:46:01.994381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.732 [2024-11-15 10:46:01.994403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.732 [2024-11-15 10:46:01.994431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.732 [2024-11-15 10:46:01.994444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.732 [2024-11-15 10:46:02.006791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.733 [2024-11-15 10:46:02.007212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-15 10:46:02.007238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.733 [2024-11-15 10:46:02.007266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.733 [2024-11-15 10:46:02.007508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.733 [2024-11-15 10:46:02.007748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.733 [2024-11-15 10:46:02.007768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.733 [2024-11-15 10:46:02.007781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.733 [2024-11-15 10:46:02.007792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.733 [2024-11-15 10:46:02.020025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.733 [2024-11-15 10:46:02.020416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-15 10:46:02.020457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.733 [2024-11-15 10:46:02.020472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.733 [2024-11-15 10:46:02.020700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.733 [2024-11-15 10:46:02.020898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.733 [2024-11-15 10:46:02.020917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.733 [2024-11-15 10:46:02.020930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.733 [2024-11-15 10:46:02.020942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.733 [2024-11-15 10:46:02.033311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.733 [2024-11-15 10:46:02.033765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-15 10:46:02.033806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.733 [2024-11-15 10:46:02.033821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.733 [2024-11-15 10:46:02.034030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.733 [2024-11-15 10:46:02.034235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.733 [2024-11-15 10:46:02.034256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.733 [2024-11-15 10:46:02.034271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.733 [2024-11-15 10:46:02.034283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.733 [2024-11-15 10:46:02.046537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.733 [2024-11-15 10:46:02.046930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-15 10:46:02.046970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.733 [2024-11-15 10:46:02.046985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.733 [2024-11-15 10:46:02.047193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.733 [2024-11-15 10:46:02.047434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.733 [2024-11-15 10:46:02.047456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.733 [2024-11-15 10:46:02.047469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.733 [2024-11-15 10:46:02.047481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.733 [2024-11-15 10:46:02.059800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.733 [2024-11-15 10:46:02.060209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-15 10:46:02.060238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.733 [2024-11-15 10:46:02.060266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.733 [2024-11-15 10:46:02.060510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.733 [2024-11-15 10:46:02.060736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.733 [2024-11-15 10:46:02.060756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.733 [2024-11-15 10:46:02.060770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.733 [2024-11-15 10:46:02.060781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.733 [2024-11-15 10:46:02.073156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.733 [2024-11-15 10:46:02.073597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-15 10:46:02.073622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.733 [2024-11-15 10:46:02.073651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.733 [2024-11-15 10:46:02.073845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.733 [2024-11-15 10:46:02.074043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.733 [2024-11-15 10:46:02.074062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.733 [2024-11-15 10:46:02.074079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.733 [2024-11-15 10:46:02.074091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.733 [2024-11-15 10:46:02.086493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.733 [2024-11-15 10:46:02.086907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-15 10:46:02.086932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.733 [2024-11-15 10:46:02.086947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.733 [2024-11-15 10:46:02.087154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.733 [2024-11-15 10:46:02.087353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.733 [2024-11-15 10:46:02.087395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.733 [2024-11-15 10:46:02.087410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.733 [2024-11-15 10:46:02.087422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.733 [2024-11-15 10:46:02.099727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.733 [2024-11-15 10:46:02.100149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-15 10:46:02.100175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.733 [2024-11-15 10:46:02.100203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.733 [2024-11-15 10:46:02.100425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.733 [2024-11-15 10:46:02.100652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.733 [2024-11-15 10:46:02.100673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.733 [2024-11-15 10:46:02.100687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.733 [2024-11-15 10:46:02.100699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.733 [2024-11-15 10:46:02.113142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.733 [2024-11-15 10:46:02.113557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-15 10:46:02.113608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.733 [2024-11-15 10:46:02.113623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.733 [2024-11-15 10:46:02.113816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.733 [2024-11-15 10:46:02.114015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.733 [2024-11-15 10:46:02.114034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.733 [2024-11-15 10:46:02.114046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.733 [2024-11-15 10:46:02.114057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.734 [2024-11-15 10:46:02.126337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.734 [2024-11-15 10:46:02.126768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.734 [2024-11-15 10:46:02.126795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.734 [2024-11-15 10:46:02.126809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.734 [2024-11-15 10:46:02.127004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.734 [2024-11-15 10:46:02.127202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.734 [2024-11-15 10:46:02.127221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.734 [2024-11-15 10:46:02.127234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.734 [2024-11-15 10:46:02.127246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.734 [2024-11-15 10:46:02.139815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.734 [2024-11-15 10:46:02.140205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.734 [2024-11-15 10:46:02.140245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.734 [2024-11-15 10:46:02.140259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.734 [2024-11-15 10:46:02.140515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.734 [2024-11-15 10:46:02.140755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.734 [2024-11-15 10:46:02.140775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.734 [2024-11-15 10:46:02.140788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.734 [2024-11-15 10:46:02.140800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.734 [2024-11-15 10:46:02.153223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.734 [2024-11-15 10:46:02.153587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.734 [2024-11-15 10:46:02.153615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.734 [2024-11-15 10:46:02.153630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.734 [2024-11-15 10:46:02.153838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.734 [2024-11-15 10:46:02.154036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.734 [2024-11-15 10:46:02.154055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.734 [2024-11-15 10:46:02.154068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.734 [2024-11-15 10:46:02.154079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.734 [2024-11-15 10:46:02.166621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.734 [2024-11-15 10:46:02.167088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.734 [2024-11-15 10:46:02.167113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.734 [2024-11-15 10:46:02.167153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.734 [2024-11-15 10:46:02.167370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.734 [2024-11-15 10:46:02.167604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.734 [2024-11-15 10:46:02.167625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.734 [2024-11-15 10:46:02.167655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.734 [2024-11-15 10:46:02.167668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.734 [2024-11-15 10:46:02.179936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.734 [2024-11-15 10:46:02.180375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.734 [2024-11-15 10:46:02.180415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.734 [2024-11-15 10:46:02.180430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.734 [2024-11-15 10:46:02.180644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.734 [2024-11-15 10:46:02.180859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.734 [2024-11-15 10:46:02.180878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.734 [2024-11-15 10:46:02.180891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.734 [2024-11-15 10:46:02.180902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.734 [2024-11-15 10:46:02.193296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.734 [2024-11-15 10:46:02.193747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.734 [2024-11-15 10:46:02.193790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.734 [2024-11-15 10:46:02.193805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.734 [2024-11-15 10:46:02.194062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.734 [2024-11-15 10:46:02.194289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.734 [2024-11-15 10:46:02.194309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.734 [2024-11-15 10:46:02.194322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.734 [2024-11-15 10:46:02.194334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.994 [2024-11-15 10:46:02.206815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.994 [2024-11-15 10:46:02.207197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.994 [2024-11-15 10:46:02.207237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.994 [2024-11-15 10:46:02.207251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.994 [2024-11-15 10:46:02.207511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.994 [2024-11-15 10:46:02.207759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.994 [2024-11-15 10:46:02.207780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.994 [2024-11-15 10:46:02.207792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.994 [2024-11-15 10:46:02.207804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.994 5904.25 IOPS, 23.06 MiB/s [2024-11-15T09:46:02.457Z] [2024-11-15 10:46:02.221686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.994 [2024-11-15 10:46:02.222133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.994 [2024-11-15 10:46:02.222158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.994 [2024-11-15 10:46:02.222187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.994 [2024-11-15 10:46:02.222434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.994 [2024-11-15 10:46:02.222665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.994 [2024-11-15 10:46:02.222702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.994 [2024-11-15 10:46:02.222716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.994 [2024-11-15 10:46:02.222728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.994 [2024-11-15 10:46:02.234970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.994 [2024-11-15 10:46:02.235393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.995 [2024-11-15 10:46:02.235436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.995 [2024-11-15 10:46:02.235451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.995 [2024-11-15 10:46:02.235687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.995 [2024-11-15 10:46:02.235901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.995 [2024-11-15 10:46:02.235920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.995 [2024-11-15 10:46:02.235933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.995 [2024-11-15 10:46:02.235945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.995 [2024-11-15 10:46:02.248274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.995 [2024-11-15 10:46:02.248685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.995 [2024-11-15 10:46:02.248712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.995 [2024-11-15 10:46:02.248726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.995 [2024-11-15 10:46:02.248920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.995 [2024-11-15 10:46:02.249118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.995 [2024-11-15 10:46:02.249137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.995 [2024-11-15 10:46:02.249154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.995 [2024-11-15 10:46:02.249166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.995 [2024-11-15 10:46:02.261541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.995 [2024-11-15 10:46:02.261972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.995 [2024-11-15 10:46:02.262001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.995 [2024-11-15 10:46:02.262030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.995 [2024-11-15 10:46:02.262223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.995 [2024-11-15 10:46:02.262464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.995 [2024-11-15 10:46:02.262486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.995 [2024-11-15 10:46:02.262500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.995 [2024-11-15 10:46:02.262512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.995 [2024-11-15 10:46:02.274838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.995 [2024-11-15 10:46:02.275260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.995 [2024-11-15 10:46:02.275285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.995 [2024-11-15 10:46:02.275299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.995 [2024-11-15 10:46:02.275567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.995 [2024-11-15 10:46:02.275806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.995 [2024-11-15 10:46:02.275826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.995 [2024-11-15 10:46:02.275838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.995 [2024-11-15 10:46:02.275850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.995 [2024-11-15 10:46:02.288116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.995 [2024-11-15 10:46:02.288546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.995 [2024-11-15 10:46:02.288578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.995 [2024-11-15 10:46:02.288608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.995 [2024-11-15 10:46:02.288819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.995 [2024-11-15 10:46:02.289017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.995 [2024-11-15 10:46:02.289036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.995 [2024-11-15 10:46:02.289049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.995 [2024-11-15 10:46:02.289061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.995 [2024-11-15 10:46:02.301330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.995 [2024-11-15 10:46:02.301767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.995 [2024-11-15 10:46:02.301808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.995 [2024-11-15 10:46:02.301823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.995 [2024-11-15 10:46:02.302031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.995 [2024-11-15 10:46:02.302229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.995 [2024-11-15 10:46:02.302248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.995 [2024-11-15 10:46:02.302261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.995 [2024-11-15 10:46:02.302272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.995 [2024-11-15 10:46:02.314503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.995 [2024-11-15 10:46:02.314930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.995 [2024-11-15 10:46:02.314981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.995 [2024-11-15 10:46:02.314995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.995 [2024-11-15 10:46:02.315189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.995 [2024-11-15 10:46:02.315431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.995 [2024-11-15 10:46:02.315453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.995 [2024-11-15 10:46:02.315466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.995 [2024-11-15 10:46:02.315478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.995 [2024-11-15 10:46:02.327801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.995 [2024-11-15 10:46:02.328231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.995 [2024-11-15 10:46:02.328256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.995 [2024-11-15 10:46:02.328285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.996 [2024-11-15 10:46:02.328528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.996 [2024-11-15 10:46:02.328765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.996 [2024-11-15 10:46:02.328785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.996 [2024-11-15 10:46:02.328798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.996 [2024-11-15 10:46:02.328810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.996 [2024-11-15 10:46:02.340990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.996 [2024-11-15 10:46:02.341399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.996 [2024-11-15 10:46:02.341440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.996 [2024-11-15 10:46:02.341460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.996 [2024-11-15 10:46:02.341687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.996 [2024-11-15 10:46:02.341885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.996 [2024-11-15 10:46:02.341905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.996 [2024-11-15 10:46:02.341917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.996 [2024-11-15 10:46:02.341929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.996 [2024-11-15 10:46:02.354260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.996 [2024-11-15 10:46:02.354712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.996 [2024-11-15 10:46:02.354750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.996 [2024-11-15 10:46:02.354764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.996 [2024-11-15 10:46:02.354972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.996 [2024-11-15 10:46:02.355170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.996 [2024-11-15 10:46:02.355190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.996 [2024-11-15 10:46:02.355202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.996 [2024-11-15 10:46:02.355214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.996 [2024-11-15 10:46:02.367574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.996 [2024-11-15 10:46:02.368006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.996 [2024-11-15 10:46:02.368034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.996 [2024-11-15 10:46:02.368048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.996 [2024-11-15 10:46:02.368256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.996 [2024-11-15 10:46:02.368503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.996 [2024-11-15 10:46:02.368524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.996 [2024-11-15 10:46:02.368538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.996 [2024-11-15 10:46:02.368550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.996 [2024-11-15 10:46:02.380897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.996 [2024-11-15 10:46:02.381263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.996 [2024-11-15 10:46:02.381290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.996 [2024-11-15 10:46:02.381304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.996 [2024-11-15 10:46:02.381546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.996 [2024-11-15 10:46:02.381787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.996 [2024-11-15 10:46:02.381807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.996 [2024-11-15 10:46:02.381821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.996 [2024-11-15 10:46:02.381832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.996 [2024-11-15 10:46:02.394222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.996 [2024-11-15 10:46:02.394688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.996 [2024-11-15 10:46:02.394727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.996 [2024-11-15 10:46:02.394742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.996 [2024-11-15 10:46:02.394935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.996 [2024-11-15 10:46:02.395133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.996 [2024-11-15 10:46:02.395152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.996 [2024-11-15 10:46:02.395165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.996 [2024-11-15 10:46:02.395176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.996 [2024-11-15 10:46:02.407571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.996 [2024-11-15 10:46:02.408025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.996 [2024-11-15 10:46:02.408072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.996 [2024-11-15 10:46:02.408087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.996 [2024-11-15 10:46:02.408281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.996 [2024-11-15 10:46:02.408507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.996 [2024-11-15 10:46:02.408528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.996 [2024-11-15 10:46:02.408541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.996 [2024-11-15 10:46:02.408553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.996 [2024-11-15 10:46:02.420811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.996 [2024-11-15 10:46:02.421252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.996 [2024-11-15 10:46:02.421277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.996 [2024-11-15 10:46:02.421307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.996 [2024-11-15 10:46:02.421530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.996 [2024-11-15 10:46:02.421768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.996 [2024-11-15 10:46:02.421787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.997 [2024-11-15 10:46:02.421806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.997 [2024-11-15 10:46:02.421819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.997 [2024-11-15 10:46:02.434160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.997 [2024-11-15 10:46:02.434581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.997 [2024-11-15 10:46:02.434607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.997 [2024-11-15 10:46:02.434638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.997 [2024-11-15 10:46:02.434848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.997 [2024-11-15 10:46:02.435046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.997 [2024-11-15 10:46:02.435065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.997 [2024-11-15 10:46:02.435078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.997 [2024-11-15 10:46:02.435089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:13.997 [2024-11-15 10:46:02.447439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.997 [2024-11-15 10:46:02.447872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.997 [2024-11-15 10:46:02.447911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:13.997 [2024-11-15 10:46:02.447926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:13.997 [2024-11-15 10:46:02.448120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:13.997 [2024-11-15 10:46:02.448317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.997 [2024-11-15 10:46:02.448337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.997 [2024-11-15 10:46:02.448375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.997 [2024-11-15 10:46:02.448389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.256 [2024-11-15 10:46:02.460977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.257 [2024-11-15 10:46:02.461428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.257 [2024-11-15 10:46:02.461470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.257 [2024-11-15 10:46:02.461485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.257 [2024-11-15 10:46:02.461712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.257 [2024-11-15 10:46:02.461910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.257 [2024-11-15 10:46:02.461929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.257 [2024-11-15 10:46:02.461942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.257 [2024-11-15 10:46:02.461954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.257 [2024-11-15 10:46:02.474224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.257 [2024-11-15 10:46:02.474661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.257 [2024-11-15 10:46:02.474702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.257 [2024-11-15 10:46:02.474717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.257 [2024-11-15 10:46:02.474925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.257 [2024-11-15 10:46:02.475123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.257 [2024-11-15 10:46:02.475142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.257 [2024-11-15 10:46:02.475155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.257 [2024-11-15 10:46:02.475166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.257 [2024-11-15 10:46:02.487585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.257 [2024-11-15 10:46:02.487938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.257 [2024-11-15 10:46:02.487964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.257 [2024-11-15 10:46:02.487978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.257 [2024-11-15 10:46:02.488171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.257 [2024-11-15 10:46:02.488396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.257 [2024-11-15 10:46:02.488435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.257 [2024-11-15 10:46:02.488449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.257 [2024-11-15 10:46:02.488461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.257 [2024-11-15 10:46:02.500862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.257 [2024-11-15 10:46:02.501209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.257 [2024-11-15 10:46:02.501235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.257 [2024-11-15 10:46:02.501250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.257 [2024-11-15 10:46:02.501473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.257 [2024-11-15 10:46:02.501693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.257 [2024-11-15 10:46:02.501713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.257 [2024-11-15 10:46:02.501726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.257 [2024-11-15 10:46:02.501738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.257 [2024-11-15 10:46:02.514132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.257 [2024-11-15 10:46:02.514499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.257 [2024-11-15 10:46:02.514539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.257 [2024-11-15 10:46:02.514559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.257 [2024-11-15 10:46:02.514798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.257 [2024-11-15 10:46:02.514991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.257 [2024-11-15 10:46:02.515010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.257 [2024-11-15 10:46:02.515022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.257 [2024-11-15 10:46:02.515034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.257 [2024-11-15 10:46:02.527304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.257 [2024-11-15 10:46:02.527656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.257 [2024-11-15 10:46:02.527683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.257 [2024-11-15 10:46:02.527713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.257 [2024-11-15 10:46:02.527902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.257 [2024-11-15 10:46:02.528095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.257 [2024-11-15 10:46:02.528116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.257 [2024-11-15 10:46:02.528130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.257 [2024-11-15 10:46:02.528143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.257 [2024-11-15 10:46:02.540419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.257 [2024-11-15 10:46:02.540812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.257 [2024-11-15 10:46:02.540868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.257 [2024-11-15 10:46:02.540882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.257 [2024-11-15 10:46:02.541085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.257 [2024-11-15 10:46:02.541277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.257 [2024-11-15 10:46:02.541295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.257 [2024-11-15 10:46:02.541308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.257 [2024-11-15 10:46:02.541319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.257 [2024-11-15 10:46:02.553549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.257 [2024-11-15 10:46:02.553962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.257 [2024-11-15 10:46:02.554006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.257 [2024-11-15 10:46:02.554020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.257 [2024-11-15 10:46:02.554222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.257 [2024-11-15 10:46:02.554446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.258 [2024-11-15 10:46:02.554466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.258 [2024-11-15 10:46:02.554479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.258 [2024-11-15 10:46:02.554491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.258 [2024-11-15 10:46:02.567140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.258 [2024-11-15 10:46:02.567505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.258 [2024-11-15 10:46:02.567533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.258 [2024-11-15 10:46:02.567550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.258 [2024-11-15 10:46:02.567788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.258 [2024-11-15 10:46:02.568014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.258 [2024-11-15 10:46:02.568035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.258 [2024-11-15 10:46:02.568049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.258 [2024-11-15 10:46:02.568077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.258 [2024-11-15 10:46:02.580812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.258 [2024-11-15 10:46:02.581185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.258 [2024-11-15 10:46:02.581238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.258 [2024-11-15 10:46:02.581252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.258 [2024-11-15 10:46:02.581481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.258 [2024-11-15 10:46:02.581728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.258 [2024-11-15 10:46:02.581748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.258 [2024-11-15 10:46:02.581760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.258 [2024-11-15 10:46:02.581772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.258 [2024-11-15 10:46:02.594091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.258 [2024-11-15 10:46:02.594419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.258 [2024-11-15 10:46:02.594448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.258 [2024-11-15 10:46:02.594465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.258 [2024-11-15 10:46:02.594691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.258 [2024-11-15 10:46:02.594884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.258 [2024-11-15 10:46:02.594903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.258 [2024-11-15 10:46:02.594923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.258 [2024-11-15 10:46:02.594935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.258 [2024-11-15 10:46:02.607389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.258 [2024-11-15 10:46:02.607786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.258 [2024-11-15 10:46:02.607812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.258 [2024-11-15 10:46:02.607826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.258 [2024-11-15 10:46:02.608015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.258 [2024-11-15 10:46:02.608207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.258 [2024-11-15 10:46:02.608226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.258 [2024-11-15 10:46:02.608238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.258 [2024-11-15 10:46:02.608249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.258 [2024-11-15 10:46:02.620852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.258 [2024-11-15 10:46:02.621215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.258 [2024-11-15 10:46:02.621242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.258 [2024-11-15 10:46:02.621257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.258 [2024-11-15 10:46:02.621506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.258 [2024-11-15 10:46:02.621753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.258 [2024-11-15 10:46:02.621773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.258 [2024-11-15 10:46:02.621802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.258 [2024-11-15 10:46:02.621815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.258 [2024-11-15 10:46:02.634434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.258 [2024-11-15 10:46:02.634779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.258 [2024-11-15 10:46:02.634805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.258 [2024-11-15 10:46:02.634819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.258 [2024-11-15 10:46:02.635013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.258 [2024-11-15 10:46:02.635211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.258 [2024-11-15 10:46:02.635230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.258 [2024-11-15 10:46:02.635243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.258 [2024-11-15 10:46:02.635255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.258 [2024-11-15 10:46:02.648116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.258 [2024-11-15 10:46:02.648544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.258 [2024-11-15 10:46:02.648572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.258 [2024-11-15 10:46:02.648588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.258 [2024-11-15 10:46:02.648837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.258 [2024-11-15 10:46:02.649048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.258 [2024-11-15 10:46:02.649066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.258 [2024-11-15 10:46:02.649079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.258 [2024-11-15 10:46:02.649090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.258 [2024-11-15 10:46:02.661547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.258 [2024-11-15 10:46:02.661917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.258 [2024-11-15 10:46:02.661960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.258 [2024-11-15 10:46:02.661974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.258 [2024-11-15 10:46:02.662175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.258 [2024-11-15 10:46:02.662397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.258 [2024-11-15 10:46:02.662419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.258 [2024-11-15 10:46:02.662433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.258 [2024-11-15 10:46:02.662445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.258 [2024-11-15 10:46:02.674863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.258 [2024-11-15 10:46:02.675223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.259 [2024-11-15 10:46:02.675248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.259 [2024-11-15 10:46:02.675262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.259 [2024-11-15 10:46:02.675483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.259 [2024-11-15 10:46:02.675701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.259 [2024-11-15 10:46:02.675721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.259 [2024-11-15 10:46:02.675733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.259 [2024-11-15 10:46:02.675744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.259 [2024-11-15 10:46:02.688114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.259 [2024-11-15 10:46:02.688488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.259 [2024-11-15 10:46:02.688529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.259 [2024-11-15 10:46:02.688548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.259 [2024-11-15 10:46:02.688768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.259 [2024-11-15 10:46:02.688960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.259 [2024-11-15 10:46:02.688980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.259 [2024-11-15 10:46:02.688992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.259 [2024-11-15 10:46:02.689003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.259 [2024-11-15 10:46:02.701271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.259 [2024-11-15 10:46:02.701733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.259 [2024-11-15 10:46:02.701786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.259 [2024-11-15 10:46:02.701799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.259 [2024-11-15 10:46:02.702001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.259 [2024-11-15 10:46:02.702194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.259 [2024-11-15 10:46:02.702213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.259 [2024-11-15 10:46:02.702225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.259 [2024-11-15 10:46:02.702236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.259 [2024-11-15 10:46:02.714480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.259 [2024-11-15 10:46:02.714914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.259 [2024-11-15 10:46:02.714965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.259 [2024-11-15 10:46:02.714979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.259 [2024-11-15 10:46:02.715182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.259 [2024-11-15 10:46:02.715401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.259 [2024-11-15 10:46:02.715430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.259 [2024-11-15 10:46:02.715458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.259 [2024-11-15 10:46:02.715471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.517 [2024-11-15 10:46:02.727754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.517 [2024-11-15 10:46:02.728206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.518 [2024-11-15 10:46:02.728253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.518 [2024-11-15 10:46:02.728268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.518 [2024-11-15 10:46:02.728517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.518 [2024-11-15 10:46:02.728752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.518 [2024-11-15 10:46:02.728773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.518 [2024-11-15 10:46:02.728801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.518 [2024-11-15 10:46:02.728814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.518 [2024-11-15 10:46:02.740817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.518 [2024-11-15 10:46:02.741251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.518 [2024-11-15 10:46:02.741305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.518 [2024-11-15 10:46:02.741319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.518 [2024-11-15 10:46:02.741553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.518 [2024-11-15 10:46:02.741787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.518 [2024-11-15 10:46:02.741806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.518 [2024-11-15 10:46:02.741818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.518 [2024-11-15 10:46:02.741830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.518 [2024-11-15 10:46:02.754238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.518 [2024-11-15 10:46:02.754685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.518 [2024-11-15 10:46:02.754710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.518 [2024-11-15 10:46:02.754724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.518 [2024-11-15 10:46:02.754912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.518 [2024-11-15 10:46:02.755104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.518 [2024-11-15 10:46:02.755123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.518 [2024-11-15 10:46:02.755135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.518 [2024-11-15 10:46:02.755146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.518 [2024-11-15 10:46:02.767349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.518 [2024-11-15 10:46:02.767717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.518 [2024-11-15 10:46:02.767755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.518 [2024-11-15 10:46:02.767770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.518 [2024-11-15 10:46:02.767972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.518 [2024-11-15 10:46:02.768164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.518 [2024-11-15 10:46:02.768183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.518 [2024-11-15 10:46:02.768200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.518 [2024-11-15 10:46:02.768212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.518 [2024-11-15 10:46:02.780442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.518 [2024-11-15 10:46:02.780849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.518 [2024-11-15 10:46:02.780874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.518 [2024-11-15 10:46:02.780888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.518 [2024-11-15 10:46:02.781091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.518 [2024-11-15 10:46:02.781283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.518 [2024-11-15 10:46:02.781301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.518 [2024-11-15 10:46:02.781313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.518 [2024-11-15 10:46:02.781325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.518 [2024-11-15 10:46:02.793427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.518 [2024-11-15 10:46:02.793841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.518 [2024-11-15 10:46:02.793867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.518 [2024-11-15 10:46:02.793880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.518 [2024-11-15 10:46:02.794083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.518 [2024-11-15 10:46:02.794274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.518 [2024-11-15 10:46:02.794293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.518 [2024-11-15 10:46:02.794305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.518 [2024-11-15 10:46:02.794316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.518 [2024-11-15 10:46:02.806508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.518 [2024-11-15 10:46:02.806870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.518 [2024-11-15 10:46:02.806896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.518 [2024-11-15 10:46:02.806924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.518 [2024-11-15 10:46:02.807126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.518 [2024-11-15 10:46:02.807318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.518 [2024-11-15 10:46:02.807337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.518 [2024-11-15 10:46:02.807349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.518 [2024-11-15 10:46:02.807370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.518 [2024-11-15 10:46:02.819495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.518 [2024-11-15 10:46:02.819929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.518 [2024-11-15 10:46:02.819968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.518 [2024-11-15 10:46:02.819981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.518 [2024-11-15 10:46:02.820183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.518 [2024-11-15 10:46:02.820401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.518 [2024-11-15 10:46:02.820422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.518 [2024-11-15 10:46:02.820435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.518 [2024-11-15 10:46:02.820446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.518 [2024-11-15 10:46:02.832598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.518 [2024-11-15 10:46:02.833026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.518 [2024-11-15 10:46:02.833065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.518 [2024-11-15 10:46:02.833079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.518 [2024-11-15 10:46:02.833267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.518 [2024-11-15 10:46:02.833488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.518 [2024-11-15 10:46:02.833509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.518 [2024-11-15 10:46:02.833522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.518 [2024-11-15 10:46:02.833533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.518 [2024-11-15 10:46:02.845644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.518 [2024-11-15 10:46:02.846054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.518 [2024-11-15 10:46:02.846087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.518 [2024-11-15 10:46:02.846115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.518 [2024-11-15 10:46:02.846304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.518 [2024-11-15 10:46:02.846526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.518 [2024-11-15 10:46:02.846546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.518 [2024-11-15 10:46:02.846559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.519 [2024-11-15 10:46:02.846570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.519 [2024-11-15 10:46:02.858761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.519 [2024-11-15 10:46:02.859190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-15 10:46:02.859214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.519 [2024-11-15 10:46:02.859247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.519 [2024-11-15 10:46:02.859464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.519 [2024-11-15 10:46:02.859663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.519 [2024-11-15 10:46:02.859696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.519 [2024-11-15 10:46:02.859708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.519 [2024-11-15 10:46:02.859720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.519 [2024-11-15 10:46:02.871796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.519 [2024-11-15 10:46:02.872210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-15 10:46:02.872261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.519 [2024-11-15 10:46:02.872275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.519 [2024-11-15 10:46:02.872510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.519 [2024-11-15 10:46:02.872730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.519 [2024-11-15 10:46:02.872749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.519 [2024-11-15 10:46:02.872761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.519 [2024-11-15 10:46:02.872772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.519 [2024-11-15 10:46:02.884840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.519 [2024-11-15 10:46:02.885252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-15 10:46:02.885303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.519 [2024-11-15 10:46:02.885317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.519 [2024-11-15 10:46:02.885540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.519 [2024-11-15 10:46:02.885774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.519 [2024-11-15 10:46:02.885793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.519 [2024-11-15 10:46:02.885806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.519 [2024-11-15 10:46:02.885817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.519 [2024-11-15 10:46:02.898244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.519 [2024-11-15 10:46:02.898666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-15 10:46:02.898705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.519 [2024-11-15 10:46:02.898719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.519 [2024-11-15 10:46:02.898920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.519 [2024-11-15 10:46:02.899117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.519 [2024-11-15 10:46:02.899136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.519 [2024-11-15 10:46:02.899148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.519 [2024-11-15 10:46:02.899160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.519 [2024-11-15 10:46:02.911408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.519 [2024-11-15 10:46:02.911795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-15 10:46:02.911833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.519 [2024-11-15 10:46:02.911848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.519 [2024-11-15 10:46:02.912049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.519 [2024-11-15 10:46:02.912241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.519 [2024-11-15 10:46:02.912260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.519 [2024-11-15 10:46:02.912272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.519 [2024-11-15 10:46:02.912283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.519 [2024-11-15 10:46:02.924504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.519 [2024-11-15 10:46:02.924935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-15 10:46:02.924974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.519 [2024-11-15 10:46:02.924988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.519 [2024-11-15 10:46:02.925176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.519 [2024-11-15 10:46:02.925394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.519 [2024-11-15 10:46:02.925423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.519 [2024-11-15 10:46:02.925436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.519 [2024-11-15 10:46:02.925462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.519 [2024-11-15 10:46:02.937572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.519 [2024-11-15 10:46:02.937997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-15 10:46:02.938022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.519 [2024-11-15 10:46:02.938050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.519 [2024-11-15 10:46:02.938238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.519 [2024-11-15 10:46:02.938472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.519 [2024-11-15 10:46:02.938494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.519 [2024-11-15 10:46:02.938511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.519 [2024-11-15 10:46:02.938524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.519 [2024-11-15 10:46:02.950598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.519 [2024-11-15 10:46:02.951024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-15 10:46:02.951049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.519 [2024-11-15 10:46:02.951077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.520 [2024-11-15 10:46:02.951265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.520 [2024-11-15 10:46:02.951503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.520 [2024-11-15 10:46:02.951524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.520 [2024-11-15 10:46:02.951537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.520 [2024-11-15 10:46:02.951549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.520 [2024-11-15 10:46:02.963623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.520 [2024-11-15 10:46:02.964051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-15 10:46:02.964076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.520 [2024-11-15 10:46:02.964105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.520 [2024-11-15 10:46:02.964307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.520 [2024-11-15 10:46:02.964529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.520 [2024-11-15 10:46:02.964550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.520 [2024-11-15 10:46:02.964562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.520 [2024-11-15 10:46:02.964574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.520 [2024-11-15 10:46:02.976622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.520 [2024-11-15 10:46:02.977028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-15 10:46:02.977056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.520 [2024-11-15 10:46:02.977070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.520 [2024-11-15 10:46:02.977272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.520 [2024-11-15 10:46:02.977492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.520 [2024-11-15 10:46:02.977513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.520 [2024-11-15 10:46:02.977525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.520 [2024-11-15 10:46:02.977537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.779 [2024-11-15 10:46:02.990184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.779 [2024-11-15 10:46:02.990646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.779 [2024-11-15 10:46:02.990686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.779 [2024-11-15 10:46:02.990699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.779 [2024-11-15 10:46:02.990901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.779 [2024-11-15 10:46:02.991093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.779 [2024-11-15 10:46:02.991111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.779 [2024-11-15 10:46:02.991124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.779 [2024-11-15 10:46:02.991135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.779 [2024-11-15 10:46:03.003174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.779 [2024-11-15 10:46:03.003635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.779 [2024-11-15 10:46:03.003674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.779 [2024-11-15 10:46:03.003689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.779 [2024-11-15 10:46:03.003877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.779 [2024-11-15 10:46:03.004069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.779 [2024-11-15 10:46:03.004088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.779 [2024-11-15 10:46:03.004100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.779 [2024-11-15 10:46:03.004112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.779 [2024-11-15 10:46:03.016190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.779 [2024-11-15 10:46:03.016631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.779 [2024-11-15 10:46:03.016683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.779 [2024-11-15 10:46:03.016698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.779 [2024-11-15 10:46:03.016900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.779 [2024-11-15 10:46:03.017093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.779 [2024-11-15 10:46:03.017113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.779 [2024-11-15 10:46:03.017127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.779 [2024-11-15 10:46:03.017140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.779 [2024-11-15 10:46:03.029378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.779 [2024-11-15 10:46:03.029858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.779 [2024-11-15 10:46:03.029906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.779 [2024-11-15 10:46:03.029925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.779 [2024-11-15 10:46:03.030127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.779 [2024-11-15 10:46:03.030319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.779 [2024-11-15 10:46:03.030338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.779 [2024-11-15 10:46:03.030350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.779 [2024-11-15 10:46:03.030371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.779 [2024-11-15 10:46:03.042467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.779 [2024-11-15 10:46:03.042906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.779 [2024-11-15 10:46:03.042945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.779 [2024-11-15 10:46:03.042960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.779 [2024-11-15 10:46:03.043148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.779 [2024-11-15 10:46:03.043340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.780 [2024-11-15 10:46:03.043358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.780 [2024-11-15 10:46:03.043396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.780 [2024-11-15 10:46:03.043409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.780 [2024-11-15 10:46:03.055529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.780 [2024-11-15 10:46:03.055905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.780 [2024-11-15 10:46:03.055945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.780 [2024-11-15 10:46:03.055960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.780 [2024-11-15 10:46:03.056162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.780 [2024-11-15 10:46:03.056355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.780 [2024-11-15 10:46:03.056398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.780 [2024-11-15 10:46:03.056412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.780 [2024-11-15 10:46:03.056424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.780 [2024-11-15 10:46:03.068735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.780 [2024-11-15 10:46:03.069165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.780 [2024-11-15 10:46:03.069204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.780 [2024-11-15 10:46:03.069219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.780 [2024-11-15 10:46:03.069434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.780 [2024-11-15 10:46:03.069638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.780 [2024-11-15 10:46:03.069657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.780 [2024-11-15 10:46:03.069670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.780 [2024-11-15 10:46:03.069696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.780 [2024-11-15 10:46:03.081725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.780 [2024-11-15 10:46:03.082167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.780 [2024-11-15 10:46:03.082191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.780 [2024-11-15 10:46:03.082220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.780 [2024-11-15 10:46:03.082436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.780 [2024-11-15 10:46:03.082635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.780 [2024-11-15 10:46:03.082654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.780 [2024-11-15 10:46:03.082668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.780 [2024-11-15 10:46:03.082679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.780 [2024-11-15 10:46:03.094715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.780 [2024-11-15 10:46:03.095131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.780 [2024-11-15 10:46:03.095155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.780 [2024-11-15 10:46:03.095184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.780 [2024-11-15 10:46:03.095396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.780 [2024-11-15 10:46:03.095594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.780 [2024-11-15 10:46:03.095614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.780 [2024-11-15 10:46:03.095626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.780 [2024-11-15 10:46:03.095638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.780 [2024-11-15 10:46:03.107816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.780 [2024-11-15 10:46:03.108226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.780 [2024-11-15 10:46:03.108276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.780 [2024-11-15 10:46:03.108290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.780 [2024-11-15 10:46:03.108542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.780 [2024-11-15 10:46:03.108762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.780 [2024-11-15 10:46:03.108782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.780 [2024-11-15 10:46:03.108799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.780 [2024-11-15 10:46:03.108826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.780 [2024-11-15 10:46:03.120909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.780 [2024-11-15 10:46:03.121320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.780 [2024-11-15 10:46:03.121379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.780 [2024-11-15 10:46:03.121394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.780 [2024-11-15 10:46:03.121595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.780 [2024-11-15 10:46:03.121787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.780 [2024-11-15 10:46:03.121806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.780 [2024-11-15 10:46:03.121818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.780 [2024-11-15 10:46:03.121829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.780 [2024-11-15 10:46:03.133938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.780 [2024-11-15 10:46:03.134332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.780 [2024-11-15 10:46:03.134389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.780 [2024-11-15 10:46:03.134403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.781 [2024-11-15 10:46:03.134606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.781 [2024-11-15 10:46:03.134798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.781 [2024-11-15 10:46:03.134817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.781 [2024-11-15 10:46:03.134830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.781 [2024-11-15 10:46:03.134840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.781 [2024-11-15 10:46:03.147048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.781 [2024-11-15 10:46:03.147425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.781 [2024-11-15 10:46:03.147452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.781 [2024-11-15 10:46:03.147467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.781 [2024-11-15 10:46:03.147661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.781 [2024-11-15 10:46:03.147868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.781 [2024-11-15 10:46:03.147888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.781 [2024-11-15 10:46:03.147900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.781 [2024-11-15 10:46:03.147911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.781 [2024-11-15 10:46:03.160322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.781 [2024-11-15 10:46:03.160737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.781 [2024-11-15 10:46:03.160763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.781 [2024-11-15 10:46:03.160777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.781 [2024-11-15 10:46:03.160965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.781 [2024-11-15 10:46:03.161157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.781 [2024-11-15 10:46:03.161176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.781 [2024-11-15 10:46:03.161188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.781 [2024-11-15 10:46:03.161200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.781 [2024-11-15 10:46:03.173563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.781 [2024-11-15 10:46:03.174007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.781 [2024-11-15 10:46:03.174032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.781 [2024-11-15 10:46:03.174060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.781 [2024-11-15 10:46:03.174248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.781 [2024-11-15 10:46:03.174467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.781 [2024-11-15 10:46:03.174487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.781 [2024-11-15 10:46:03.174500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.781 [2024-11-15 10:46:03.174512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.781 [2024-11-15 10:46:03.186715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.781 [2024-11-15 10:46:03.187085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.781 [2024-11-15 10:46:03.187126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.781 [2024-11-15 10:46:03.187140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.781 [2024-11-15 10:46:03.187371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.781 [2024-11-15 10:46:03.187592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.781 [2024-11-15 10:46:03.187612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.781 [2024-11-15 10:46:03.187625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.781 [2024-11-15 10:46:03.187637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.781 [2024-11-15 10:46:03.199767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.781 [2024-11-15 10:46:03.200169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.781 [2024-11-15 10:46:03.200194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.781 [2024-11-15 10:46:03.200226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.781 [2024-11-15 10:46:03.200458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.781 [2024-11-15 10:46:03.200663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.781 [2024-11-15 10:46:03.200684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.781 [2024-11-15 10:46:03.200696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.781 [2024-11-15 10:46:03.200708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.781 [2024-11-15 10:46:03.212816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.781 [2024-11-15 10:46:03.213241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.781 [2024-11-15 10:46:03.213279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.781 [2024-11-15 10:46:03.213294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.781 [2024-11-15 10:46:03.213509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.781 [2024-11-15 10:46:03.213722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.781 [2024-11-15 10:46:03.213741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.781 [2024-11-15 10:46:03.213754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.781 [2024-11-15 10:46:03.213765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.781 4723.40 IOPS, 18.45 MiB/s [2024-11-15T09:46:03.244Z] [2024-11-15 10:46:03.226060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.781 [2024-11-15 10:46:03.226490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.781 [2024-11-15 10:46:03.226529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.781 [2024-11-15 10:46:03.226545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.781 [2024-11-15 10:46:03.226752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.781 [2024-11-15 10:46:03.226944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.781 [2024-11-15 10:46:03.226963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.781 [2024-11-15 10:46:03.226975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.781 [2024-11-15 10:46:03.226986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:14.781 [2024-11-15 10:46:03.239237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.781 [2024-11-15 10:46:03.239703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.781 [2024-11-15 10:46:03.239764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:14.782 [2024-11-15 10:46:03.239778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:14.782 [2024-11-15 10:46:03.239980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:14.782 [2024-11-15 10:46:03.240180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.782 [2024-11-15 10:46:03.240199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.782 [2024-11-15 10:46:03.240211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.782 [2024-11-15 10:46:03.240222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.041 [2024-11-15 10:46:03.252710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.041 [2024-11-15 10:46:03.253143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.041 [2024-11-15 10:46:03.253182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.041 [2024-11-15 10:46:03.253197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.041 [2024-11-15 10:46:03.253431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.041 [2024-11-15 10:46:03.253643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.041 [2024-11-15 10:46:03.253677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.041 [2024-11-15 10:46:03.253690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.041 [2024-11-15 10:46:03.253702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.041 [2024-11-15 10:46:03.265885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.041 [2024-11-15 10:46:03.266298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.041 [2024-11-15 10:46:03.266322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.041 [2024-11-15 10:46:03.266335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.042 [2024-11-15 10:46:03.266571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.042 [2024-11-15 10:46:03.266803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.042 [2024-11-15 10:46:03.266822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.042 [2024-11-15 10:46:03.266835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.042 [2024-11-15 10:46:03.266846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.042 [2024-11-15 10:46:03.279009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.042 [2024-11-15 10:46:03.279380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.042 [2024-11-15 10:46:03.279419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.042 [2024-11-15 10:46:03.279432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.042 [2024-11-15 10:46:03.279635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.042 [2024-11-15 10:46:03.279827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.042 [2024-11-15 10:46:03.279846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.042 [2024-11-15 10:46:03.279863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.042 [2024-11-15 10:46:03.279875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.042 [2024-11-15 10:46:03.292107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.042 [2024-11-15 10:46:03.292529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.042 [2024-11-15 10:46:03.292555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.042 [2024-11-15 10:46:03.292583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.042 [2024-11-15 10:46:03.292771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.042 [2024-11-15 10:46:03.292963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.042 [2024-11-15 10:46:03.292982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.042 [2024-11-15 10:46:03.292994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.042 [2024-11-15 10:46:03.293006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.042 [2024-11-15 10:46:03.305259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.042 [2024-11-15 10:46:03.305726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.042 [2024-11-15 10:46:03.305751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.042 [2024-11-15 10:46:03.305780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.042 [2024-11-15 10:46:03.305977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.042 [2024-11-15 10:46:03.306171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.042 [2024-11-15 10:46:03.306190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.042 [2024-11-15 10:46:03.306202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.042 [2024-11-15 10:46:03.306213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.042 [2024-11-15 10:46:03.318252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.042 [2024-11-15 10:46:03.318679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.042 [2024-11-15 10:46:03.318705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.042 [2024-11-15 10:46:03.318719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.042 [2024-11-15 10:46:03.318908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.042 [2024-11-15 10:46:03.319099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.042 [2024-11-15 10:46:03.319118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.042 [2024-11-15 10:46:03.319130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.042 [2024-11-15 10:46:03.319142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.042 [2024-11-15 10:46:03.331334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.042 [2024-11-15 10:46:03.331763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.042 [2024-11-15 10:46:03.331788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.042 [2024-11-15 10:46:03.331816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.042 [2024-11-15 10:46:03.332004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.042 [2024-11-15 10:46:03.332196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.042 [2024-11-15 10:46:03.332214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.042 [2024-11-15 10:46:03.332227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.042 [2024-11-15 10:46:03.332238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.042 [2024-11-15 10:46:03.344350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.042 [2024-11-15 10:46:03.344766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.042 [2024-11-15 10:46:03.344795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.042 [2024-11-15 10:46:03.344824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.042 [2024-11-15 10:46:03.345012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.042 [2024-11-15 10:46:03.345204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.042 [2024-11-15 10:46:03.345222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.042 [2024-11-15 10:46:03.345234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.042 [2024-11-15 10:46:03.345246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.042 [2024-11-15 10:46:03.357486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.042 [2024-11-15 10:46:03.357895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.042 [2024-11-15 10:46:03.357948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.042 [2024-11-15 10:46:03.357962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.042 [2024-11-15 10:46:03.358164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.042 [2024-11-15 10:46:03.358356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.042 [2024-11-15 10:46:03.358400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.042 [2024-11-15 10:46:03.358413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.042 [2024-11-15 10:46:03.358425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.042 [2024-11-15 10:46:03.370685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.042 [2024-11-15 10:46:03.371090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.042 [2024-11-15 10:46:03.371128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.042 [2024-11-15 10:46:03.371147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.042 [2024-11-15 10:46:03.371336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.042 [2024-11-15 10:46:03.371558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.042 [2024-11-15 10:46:03.371578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.042 [2024-11-15 10:46:03.371590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.042 [2024-11-15 10:46:03.371602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.042 [2024-11-15 10:46:03.383624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.042 [2024-11-15 10:46:03.384061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.043 [2024-11-15 10:46:03.384111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.043 [2024-11-15 10:46:03.384125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.043 [2024-11-15 10:46:03.384327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.043 [2024-11-15 10:46:03.384568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.043 [2024-11-15 10:46:03.384589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.043 [2024-11-15 10:46:03.384602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.043 [2024-11-15 10:46:03.384614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.043 [2024-11-15 10:46:03.396645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.043 [2024-11-15 10:46:03.397083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.043 [2024-11-15 10:46:03.397130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.043 [2024-11-15 10:46:03.397144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.043 [2024-11-15 10:46:03.397345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.043 [2024-11-15 10:46:03.397567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.043 [2024-11-15 10:46:03.397588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.043 [2024-11-15 10:46:03.397601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.043 [2024-11-15 10:46:03.397613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.043 [2024-11-15 10:46:03.409921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.043 [2024-11-15 10:46:03.410323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.043 [2024-11-15 10:46:03.410387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.043 [2024-11-15 10:46:03.410403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.043 [2024-11-15 10:46:03.410616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.043 [2024-11-15 10:46:03.410846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.043 [2024-11-15 10:46:03.410865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.043 [2024-11-15 10:46:03.410877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.043 [2024-11-15 10:46:03.410889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.043 [2024-11-15 10:46:03.423011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.043 [2024-11-15 10:46:03.423436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.043 [2024-11-15 10:46:03.423462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.043 [2024-11-15 10:46:03.423491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.043 [2024-11-15 10:46:03.423718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.043 [2024-11-15 10:46:03.423929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.043 [2024-11-15 10:46:03.423948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.043 [2024-11-15 10:46:03.423960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.043 [2024-11-15 10:46:03.423971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.043 [2024-11-15 10:46:03.436148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.043 [2024-11-15 10:46:03.436616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.043 [2024-11-15 10:46:03.436659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.043 [2024-11-15 10:46:03.436673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.043 [2024-11-15 10:46:03.436879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.043 [2024-11-15 10:46:03.437070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.043 [2024-11-15 10:46:03.437089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.043 [2024-11-15 10:46:03.437102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.043 [2024-11-15 10:46:03.437113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.043 [2024-11-15 10:46:03.449143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.043 [2024-11-15 10:46:03.449549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.043 [2024-11-15 10:46:03.449574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.043 [2024-11-15 10:46:03.449587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.043 [2024-11-15 10:46:03.449789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.043 [2024-11-15 10:46:03.449981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.043 [2024-11-15 10:46:03.450000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.043 [2024-11-15 10:46:03.450017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.043 [2024-11-15 10:46:03.450028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.043 [2024-11-15 10:46:03.462259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.043 [2024-11-15 10:46:03.462663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.043 [2024-11-15 10:46:03.462702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.043 [2024-11-15 10:46:03.462716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.043 [2024-11-15 10:46:03.462918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.043 [2024-11-15 10:46:03.463110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.043 [2024-11-15 10:46:03.463129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.043 [2024-11-15 10:46:03.463141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.043 [2024-11-15 10:46:03.463152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.043 [2024-11-15 10:46:03.475279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.043 [2024-11-15 10:46:03.475754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.043 [2024-11-15 10:46:03.475804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.043 [2024-11-15 10:46:03.475818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.043 [2024-11-15 10:46:03.476020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.043 [2024-11-15 10:46:03.476212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.043 [2024-11-15 10:46:03.476231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.043 [2024-11-15 10:46:03.476243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.043 [2024-11-15 10:46:03.476254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.043 [2024-11-15 10:46:03.488397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.043 [2024-11-15 10:46:03.488780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.043 [2024-11-15 10:46:03.488836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.043 [2024-11-15 10:46:03.488850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.043 [2024-11-15 10:46:03.489052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.043 [2024-11-15 10:46:03.489244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.043 [2024-11-15 10:46:03.489263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.043 [2024-11-15 10:46:03.489275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.043 [2024-11-15 10:46:03.489286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.044 [2024-11-15 10:46:03.501516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.044 [2024-11-15 10:46:03.501952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.044 [2024-11-15 10:46:03.501990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.044 [2024-11-15 10:46:03.502005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.044 [2024-11-15 10:46:03.502193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.044 [2024-11-15 10:46:03.502411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.044 [2024-11-15 10:46:03.502432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.044 [2024-11-15 10:46:03.502444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.044 [2024-11-15 10:46:03.502456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.303 [2024-11-15 10:46:03.515100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.303 [2024-11-15 10:46:03.515479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.303 [2024-11-15 10:46:03.515508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.303 [2024-11-15 10:46:03.515524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.303 [2024-11-15 10:46:03.515766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.303 [2024-11-15 10:46:03.515959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.303 [2024-11-15 10:46:03.515978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.303 [2024-11-15 10:46:03.515991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.303 [2024-11-15 10:46:03.516002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.303 [2024-11-15 10:46:03.528468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.303 [2024-11-15 10:46:03.528902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.303 [2024-11-15 10:46:03.528941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.303 [2024-11-15 10:46:03.528956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.303 [2024-11-15 10:46:03.529149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.303 [2024-11-15 10:46:03.529372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.303 [2024-11-15 10:46:03.529394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.303 [2024-11-15 10:46:03.529408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.303 [2024-11-15 10:46:03.529426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.303 [2024-11-15 10:46:03.541927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.303 [2024-11-15 10:46:03.542344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.303 [2024-11-15 10:46:03.542390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.303 [2024-11-15 10:46:03.542412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.303 [2024-11-15 10:46:03.542633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.303 [2024-11-15 10:46:03.542853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.303 [2024-11-15 10:46:03.542872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.303 [2024-11-15 10:46:03.542885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.303 [2024-11-15 10:46:03.542897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.303 [2024-11-15 10:46:03.555218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.303 [2024-11-15 10:46:03.555639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.303 [2024-11-15 10:46:03.555665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.303 [2024-11-15 10:46:03.555680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.303 [2024-11-15 10:46:03.555873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.303 [2024-11-15 10:46:03.556072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.303 [2024-11-15 10:46:03.556091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.303 [2024-11-15 10:46:03.556103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.303 [2024-11-15 10:46:03.556115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.303 [2024-11-15 10:46:03.568460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.303 [2024-11-15 10:46:03.568916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.303 [2024-11-15 10:46:03.568956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.303 [2024-11-15 10:46:03.568971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.303 [2024-11-15 10:46:03.569164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.303 [2024-11-15 10:46:03.569388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.303 [2024-11-15 10:46:03.569424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.303 [2024-11-15 10:46:03.569438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.303 [2024-11-15 10:46:03.569451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.304 [2024-11-15 10:46:03.581728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.304 [2024-11-15 10:46:03.582124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.304 [2024-11-15 10:46:03.582164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.304 [2024-11-15 10:46:03.582178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.304 [2024-11-15 10:46:03.582428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.304 [2024-11-15 10:46:03.582660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.304 [2024-11-15 10:46:03.582681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.304 [2024-11-15 10:46:03.582694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.304 [2024-11-15 10:46:03.582706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.304 [2024-11-15 10:46:03.595024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.304 [2024-11-15 10:46:03.595437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.304 [2024-11-15 10:46:03.595465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.304 [2024-11-15 10:46:03.595495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.304 [2024-11-15 10:46:03.595730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.304 [2024-11-15 10:46:03.595928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.304 [2024-11-15 10:46:03.595948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.304 [2024-11-15 10:46:03.595960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.304 [2024-11-15 10:46:03.595972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.304 [2024-11-15 10:46:03.608306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.304 [2024-11-15 10:46:03.608759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.304 [2024-11-15 10:46:03.608799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.304 [2024-11-15 10:46:03.608813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.304 [2024-11-15 10:46:03.609022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.304 [2024-11-15 10:46:03.609221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.304 [2024-11-15 10:46:03.609240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.304 [2024-11-15 10:46:03.609253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.304 [2024-11-15 10:46:03.609264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.304 [2024-11-15 10:46:03.621576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.304 [2024-11-15 10:46:03.622024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.304 [2024-11-15 10:46:03.622064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.304 [2024-11-15 10:46:03.622080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.304 [2024-11-15 10:46:03.622274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.304 [2024-11-15 10:46:03.622508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.304 [2024-11-15 10:46:03.622530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.304 [2024-11-15 10:46:03.622548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.304 [2024-11-15 10:46:03.622562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.304 [2024-11-15 10:46:03.634892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.304 [2024-11-15 10:46:03.635320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.304 [2024-11-15 10:46:03.635360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.304 [2024-11-15 10:46:03.635386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.304 [2024-11-15 10:46:03.635614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.304 [2024-11-15 10:46:03.635855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.304 [2024-11-15 10:46:03.635875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.304 [2024-11-15 10:46:03.635888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.304 [2024-11-15 10:46:03.635899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.304 [2024-11-15 10:46:03.648253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.304 [2024-11-15 10:46:03.648619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.304 [2024-11-15 10:46:03.648660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.304 [2024-11-15 10:46:03.648675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.304 [2024-11-15 10:46:03.648869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.304 [2024-11-15 10:46:03.649067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.304 [2024-11-15 10:46:03.649086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.304 [2024-11-15 10:46:03.649100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.304 [2024-11-15 10:46:03.649112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.304 [2024-11-15 10:46:03.661488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.304 [2024-11-15 10:46:03.661900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.304 [2024-11-15 10:46:03.661925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.304 [2024-11-15 10:46:03.661939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.304 [2024-11-15 10:46:03.662148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.304 [2024-11-15 10:46:03.662346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.304 [2024-11-15 10:46:03.662389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.304 [2024-11-15 10:46:03.662416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.304 [2024-11-15 10:46:03.662429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.304 [2024-11-15 10:46:03.674721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.304 [2024-11-15 10:46:03.675049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.304 [2024-11-15 10:46:03.675075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.304 [2024-11-15 10:46:03.675090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.304 [2024-11-15 10:46:03.675284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.304 [2024-11-15 10:46:03.675537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.304 [2024-11-15 10:46:03.675568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.304 [2024-11-15 10:46:03.675582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.304 [2024-11-15 10:46:03.675596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.304 [2024-11-15 10:46:03.687962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.304 [2024-11-15 10:46:03.688307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.304 [2024-11-15 10:46:03.688333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.304 [2024-11-15 10:46:03.688348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.304 [2024-11-15 10:46:03.688569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.304 [2024-11-15 10:46:03.688786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.304 [2024-11-15 10:46:03.688806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.304 [2024-11-15 10:46:03.688818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.304 [2024-11-15 10:46:03.688830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.304 [2024-11-15 10:46:03.701326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.304 [2024-11-15 10:46:03.701703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.304 [2024-11-15 10:46:03.701729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.304 [2024-11-15 10:46:03.701744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.304 [2024-11-15 10:46:03.701938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.304 [2024-11-15 10:46:03.702136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.304 [2024-11-15 10:46:03.702156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.304 [2024-11-15 10:46:03.702170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.305 [2024-11-15 10:46:03.702182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.305 [2024-11-15 10:46:03.714605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.305 [2024-11-15 10:46:03.714994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.305 [2024-11-15 10:46:03.715020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.305 [2024-11-15 10:46:03.715040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.305 [2024-11-15 10:46:03.715235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.305 [2024-11-15 10:46:03.715463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.305 [2024-11-15 10:46:03.715485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.305 [2024-11-15 10:46:03.715499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.305 [2024-11-15 10:46:03.715512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.305 [2024-11-15 10:46:03.728211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.305 [2024-11-15 10:46:03.728590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.305 [2024-11-15 10:46:03.728618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.305 [2024-11-15 10:46:03.728649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.305 [2024-11-15 10:46:03.728905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.305 [2024-11-15 10:46:03.729131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.305 [2024-11-15 10:46:03.729151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.305 [2024-11-15 10:46:03.729164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.305 [2024-11-15 10:46:03.729176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.305 [2024-11-15 10:46:03.741476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.305 [2024-11-15 10:46:03.741851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.305 [2024-11-15 10:46:03.741891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.305 [2024-11-15 10:46:03.741906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.305 [2024-11-15 10:46:03.742115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.305 [2024-11-15 10:46:03.742313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.305 [2024-11-15 10:46:03.742333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.305 [2024-11-15 10:46:03.742360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.305 [2024-11-15 10:46:03.742386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.305 [2024-11-15 10:46:03.754898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.305 [2024-11-15 10:46:03.755231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.305 [2024-11-15 10:46:03.755257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.305 [2024-11-15 10:46:03.755272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.305 [2024-11-15 10:46:03.755499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.305 [2024-11-15 10:46:03.755727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.305 [2024-11-15 10:46:03.755747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.305 [2024-11-15 10:46:03.755760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.305 [2024-11-15 10:46:03.755772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.305 [2024-11-15 10:46:03.768554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.305 [2024-11-15 10:46:03.768964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.305 [2024-11-15 10:46:03.768992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.305 [2024-11-15 10:46:03.769008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.305 [2024-11-15 10:46:03.769214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.564 [2024-11-15 10:46:03.769452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.564 [2024-11-15 10:46:03.769475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.564 [2024-11-15 10:46:03.769490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.564 [2024-11-15 10:46:03.769503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.565 [2024-11-15 10:46:03.781912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.565 [2024-11-15 10:46:03.782293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.565 [2024-11-15 10:46:03.782344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.565 [2024-11-15 10:46:03.782359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.565 [2024-11-15 10:46:03.782587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.565 [2024-11-15 10:46:03.782826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.565 [2024-11-15 10:46:03.782846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.565 [2024-11-15 10:46:03.782859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.565 [2024-11-15 10:46:03.782870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.565 [2024-11-15 10:46:03.795213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.565 [2024-11-15 10:46:03.795550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.565 [2024-11-15 10:46:03.795577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.565 [2024-11-15 10:46:03.795593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.565 [2024-11-15 10:46:03.795804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.565 [2024-11-15 10:46:03.796001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.565 [2024-11-15 10:46:03.796021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.565 [2024-11-15 10:46:03.796038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.565 [2024-11-15 10:46:03.796051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.565 [2024-11-15 10:46:03.808521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.565 [2024-11-15 10:46:03.808969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.565 [2024-11-15 10:46:03.808994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.565 [2024-11-15 10:46:03.809021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.565 [2024-11-15 10:46:03.809215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.565 [2024-11-15 10:46:03.809442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.565 [2024-11-15 10:46:03.809463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.565 [2024-11-15 10:46:03.809476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.565 [2024-11-15 10:46:03.809488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.565 [2024-11-15 10:46:03.821953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.565 [2024-11-15 10:46:03.822295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.565 [2024-11-15 10:46:03.822321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.565 [2024-11-15 10:46:03.822336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.565 [2024-11-15 10:46:03.822564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.565 [2024-11-15 10:46:03.822788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.565 [2024-11-15 10:46:03.822808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.565 [2024-11-15 10:46:03.822821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.565 [2024-11-15 10:46:03.822833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.565 [2024-11-15 10:46:03.835254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.565 [2024-11-15 10:46:03.835587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.565 [2024-11-15 10:46:03.835615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.565 [2024-11-15 10:46:03.835646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.565 [2024-11-15 10:46:03.835862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.565 [2024-11-15 10:46:03.836079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.565 [2024-11-15 10:46:03.836100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.565 [2024-11-15 10:46:03.836114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.565 [2024-11-15 10:46:03.836126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.565 [2024-11-15 10:46:03.848600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.565 [2024-11-15 10:46:03.849029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.565 [2024-11-15 10:46:03.849054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.565 [2024-11-15 10:46:03.849068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.565 [2024-11-15 10:46:03.849276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.565 [2024-11-15 10:46:03.849504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.565 [2024-11-15 10:46:03.849525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.565 [2024-11-15 10:46:03.849538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.565 [2024-11-15 10:46:03.849551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
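The entries above all repeat the same reconnect cycle for nqn.2016-06.io.spdk:cnode1: bdev_nvme disconnects the controller, the TCP connect() to 10.0.0.2 port 4420 is refused (errno = 111, which on Linux is ECONNREFUSED), controller initialization is aborted, and the reset is reported as failed before the next attempt is scheduled roughly every 13 ms. To double-check what errno 111 decodes to on the build host, a hypothetical one-liner (not part of the test scripts) is:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused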
00:27:15.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 487046 Killed "${NVMF_APP[@]}" "$@" 00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=488014 00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 488014 00:27:15.565 [2024-11-15 10:46:03.861886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 488014 ']' 00:27:15.565 [2024-11-15 10:46:03.862281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.565 [2024-11-15 10:46:03.862320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.565 [2024-11-15 10:46:03.862335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:15.565 [2024-11-15 10:46:03.862562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.565 [2024-11-15 10:46:03.862799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.565 [2024-11-15 10:46:03.862820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.565 [2024-11-15 10:46:03.862833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.565 [2024-11-15 10:46:03.862845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
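Interleaved with the reconnect errors, bdevperf.sh reports here that the previous target process (PID 487046) was killed and tgt_init/nvmfappstart relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, then waits for it to listen on /var/tmp/spdk.sock. A rough sketch of the equivalent manual steps, reconstructed only from the command line printed in this log (the readiness loop and the rpc_get_methods call are assumptions about what waitforlisten does, not a copy of it):

    # Relaunch the target with the arguments shown in the log above.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the RPC socket until the target responds; rpc_get_methods is a
    # standard SPDK RPC that succeeds once the app is listening.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done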
00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:15.565 10:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.565 [2024-11-15 10:46:03.875097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.565 [2024-11-15 10:46:03.875443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.566 [2024-11-15 10:46:03.875470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.566 [2024-11-15 10:46:03.875486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.566 [2024-11-15 10:46:03.875720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.566 [2024-11-15 10:46:03.875919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.566 [2024-11-15 10:46:03.875939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.566 [2024-11-15 10:46:03.875951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.566 [2024-11-15 10:46:03.875963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.566 [2024-11-15 10:46:03.888276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.566 [2024-11-15 10:46:03.888695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.566 [2024-11-15 10:46:03.888722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.566 [2024-11-15 10:46:03.888736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.566 [2024-11-15 10:46:03.888930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.566 [2024-11-15 10:46:03.889128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.566 [2024-11-15 10:46:03.889148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.566 [2024-11-15 10:46:03.889160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.566 [2024-11-15 10:46:03.889172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.566 [2024-11-15 10:46:03.901594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.566 [2024-11-15 10:46:03.901980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.566 [2024-11-15 10:46:03.902006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.566 [2024-11-15 10:46:03.902020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.566 [2024-11-15 10:46:03.902214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.566 [2024-11-15 10:46:03.902446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.566 [2024-11-15 10:46:03.902468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.566 [2024-11-15 10:46:03.902481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.566 [2024-11-15 10:46:03.902494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.566 [2024-11-15 10:46:03.911578] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:27:15.566 [2024-11-15 10:46:03.911666] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.566 [2024-11-15 10:46:03.915012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.566 [2024-11-15 10:46:03.915372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.566 [2024-11-15 10:46:03.915399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.566 [2024-11-15 10:46:03.915421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.566 [2024-11-15 10:46:03.915622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.566 [2024-11-15 10:46:03.915836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.566 [2024-11-15 10:46:03.915856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.566 [2024-11-15 10:46:03.915870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.566 [2024-11-15 10:46:03.915882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.566 [2024-11-15 10:46:03.928490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.566 [2024-11-15 10:46:03.928938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.566 [2024-11-15 10:46:03.928963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.566 [2024-11-15 10:46:03.928992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.566 [2024-11-15 10:46:03.929186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.566 [2024-11-15 10:46:03.929412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.566 [2024-11-15 10:46:03.929433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.566 [2024-11-15 10:46:03.929447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.566 [2024-11-15 10:46:03.929459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.566 [2024-11-15 10:46:03.941836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.566 [2024-11-15 10:46:03.942255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.566 [2024-11-15 10:46:03.942281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.566 [2024-11-15 10:46:03.942295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.566 [2024-11-15 10:46:03.942534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.566 [2024-11-15 10:46:03.942756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.566 [2024-11-15 10:46:03.942775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.566 [2024-11-15 10:46:03.942788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.566 [2024-11-15 10:46:03.942800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.566 [2024-11-15 10:46:03.955275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.566 [2024-11-15 10:46:03.955628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.566 [2024-11-15 10:46:03.955656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.566 [2024-11-15 10:46:03.955686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.566 [2024-11-15 10:46:03.955886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.566 [2024-11-15 10:46:03.956091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.566 [2024-11-15 10:46:03.956111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.566 [2024-11-15 10:46:03.956125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.566 [2024-11-15 10:46:03.956137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.566 [2024-11-15 10:46:03.968731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.566 [2024-11-15 10:46:03.969112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.566 [2024-11-15 10:46:03.969152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.566 [2024-11-15 10:46:03.969168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.566 [2024-11-15 10:46:03.969415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.566 [2024-11-15 10:46:03.969626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.566 [2024-11-15 10:46:03.969647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.566 [2024-11-15 10:46:03.969660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.566 [2024-11-15 10:46:03.969688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.566 [2024-11-15 10:46:03.981994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.566 [2024-11-15 10:46:03.982339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.566 [2024-11-15 10:46:03.982389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.566 [2024-11-15 10:46:03.982406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.566 [2024-11-15 10:46:03.982620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.566 [2024-11-15 10:46:03.982842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.566 [2024-11-15 10:46:03.982862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.566 [2024-11-15 10:46:03.982875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.566 [2024-11-15 10:46:03.982887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.566 [2024-11-15 10:46:03.986408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:15.567 [2024-11-15 10:46:03.995434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.567 [2024-11-15 10:46:03.995997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.567 [2024-11-15 10:46:03.996048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.567 [2024-11-15 10:46:03.996078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.567 [2024-11-15 10:46:03.996290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.567 [2024-11-15 10:46:03.996538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.567 [2024-11-15 10:46:03.996562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.567 [2024-11-15 10:46:03.996580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.567 [2024-11-15 10:46:03.996596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.567 [2024-11-15 10:46:04.008928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.567 [2024-11-15 10:46:04.009460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.567 [2024-11-15 10:46:04.009510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.567 [2024-11-15 10:46:04.009530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.567 [2024-11-15 10:46:04.009762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.567 [2024-11-15 10:46:04.009979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.567 [2024-11-15 10:46:04.010000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.567 [2024-11-15 10:46:04.010016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.567 [2024-11-15 10:46:04.010030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.567 [2024-11-15 10:46:04.022273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.567 [2024-11-15 10:46:04.022692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.567 [2024-11-15 10:46:04.022719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.567 [2024-11-15 10:46:04.022734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.567 [2024-11-15 10:46:04.022969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.567 [2024-11-15 10:46:04.023203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.567 [2024-11-15 10:46:04.023224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.567 [2024-11-15 10:46:04.023238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.567 [2024-11-15 10:46:04.023250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.826 [2024-11-15 10:46:04.035696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.826 [2024-11-15 10:46:04.036122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.826 [2024-11-15 10:46:04.036163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.826 [2024-11-15 10:46:04.036179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.826 [2024-11-15 10:46:04.036434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.826 [2024-11-15 10:46:04.036662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.826 [2024-11-15 10:46:04.036686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.826 [2024-11-15 10:46:04.036700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.826 [2024-11-15 10:46:04.036714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.826 [2024-11-15 10:46:04.045949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.826 [2024-11-15 10:46:04.045982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.826 [2024-11-15 10:46:04.046011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:15.826 [2024-11-15 10:46:04.046022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:15.826 [2024-11-15 10:46:04.046031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
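The app_setup_trace notices above list the two ways to pull the tracepoint data out of this run; restated as commands (the spdk_trace invocation is taken verbatim from the NOTICE line, the copy destination is an arbitrary example):

    # Snapshot the nvmf tracepoints of app instance 0 at runtime:
    spdk_trace -s nvmf -i 0
    # Or copy the shared-memory trace file for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0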
00:27:15.826 [2024-11-15 10:46:04.047449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:15.826 [2024-11-15 10:46:04.047514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:15.826 [2024-11-15 10:46:04.047518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.826 [2024-11-15 10:46:04.049223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.826 [2024-11-15 10:46:04.049711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.826 [2024-11-15 10:46:04.049756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.826 [2024-11-15 10:46:04.049775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.826 [2024-11-15 10:46:04.049997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.826 [2024-11-15 10:46:04.050218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.826 [2024-11-15 10:46:04.050239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.826 [2024-11-15 10:46:04.050255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.826 [2024-11-15 10:46:04.050270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.826 [2024-11-15 10:46:04.062789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.826 [2024-11-15 10:46:04.063298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.826 [2024-11-15 10:46:04.063354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.826 [2024-11-15 10:46:04.063383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.826 [2024-11-15 10:46:04.063630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.826 [2024-11-15 10:46:04.063854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.826 [2024-11-15 10:46:04.063877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.826 [2024-11-15 10:46:04.063896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.826 [2024-11-15 10:46:04.063912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
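The three reactors above (cores 1, 2 and 3) line up with the -m 0xE core mask passed to nvmf_tgt and with the earlier "Total cores available: 3" notice: 0xE is binary 1110, i.e. bits 1 through 3 set. As a quick illustration only:

    # 0xE = 0b1110 -> CPU cores 1, 2 and 3, matching the reactors reported above.
    python3 -c 'm = 0xE; print([c for c in range(8) if m >> c & 1])'
    # [1, 2, 3]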
00:27:15.826 [2024-11-15 10:46:04.076324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.826 [2024-11-15 10:46:04.076922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.826 [2024-11-15 10:46:04.076978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.826 [2024-11-15 10:46:04.076999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.826 [2024-11-15 10:46:04.077248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.826 [2024-11-15 10:46:04.077485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.826 [2024-11-15 10:46:04.077509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.827 [2024-11-15 10:46:04.077526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.827 [2024-11-15 10:46:04.077542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.827 [2024-11-15 10:46:04.089940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.827 [2024-11-15 10:46:04.090515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.827 [2024-11-15 10:46:04.090556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.827 [2024-11-15 10:46:04.090577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.827 [2024-11-15 10:46:04.090800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.827 [2024-11-15 10:46:04.091034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.827 [2024-11-15 10:46:04.091057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.827 [2024-11-15 10:46:04.091075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.827 [2024-11-15 10:46:04.091091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.827 [2024-11-15 10:46:04.103534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.827 [2024-11-15 10:46:04.104074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.827 [2024-11-15 10:46:04.104124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.827 [2024-11-15 10:46:04.104145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.827 [2024-11-15 10:46:04.104385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.827 [2024-11-15 10:46:04.104618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.827 [2024-11-15 10:46:04.104641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.827 [2024-11-15 10:46:04.104658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.827 [2024-11-15 10:46:04.104674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.827 [2024-11-15 10:46:04.117100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.827 [2024-11-15 10:46:04.117703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.827 [2024-11-15 10:46:04.117742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.827 [2024-11-15 10:46:04.117789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.827 [2024-11-15 10:46:04.118014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.827 [2024-11-15 10:46:04.118239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.827 [2024-11-15 10:46:04.118261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.827 [2024-11-15 10:46:04.118279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.827 [2024-11-15 10:46:04.118295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
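The same five-line failure signature (disconnect, connect_sock errno 111, flush on a bad descriptor, process_init error, reset complete failed) repeats while the target side is still coming up. How long and how quickly the host keeps retrying is governed by the bdev_nvme reconnect options; the sketch below adjusts them through scripts/rpc.py. The flag names are as I recall them from recent SPDK releases and are not taken from this log, so verify with './scripts/rpc.py bdev_nvme_set_options -h', and note the call has to happen before any controller is attached.

# Illustrative values only: retry a lost controller for up to 30 s, pausing 2 s between attempts.
./scripts/rpc.py bdev_nvme_set_options --ctrlr-loss-timeout-sec 30 --reconnect-delay-sec 2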
00:27:15.827 [2024-11-15 10:46:04.130744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.827 [2024-11-15 10:46:04.131264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.827 [2024-11-15 10:46:04.131298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.827 [2024-11-15 10:46:04.131333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.827 [2024-11-15 10:46:04.131562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.827 [2024-11-15 10:46:04.131785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.827 [2024-11-15 10:46:04.131808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.827 [2024-11-15 10:46:04.131825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.827 [2024-11-15 10:46:04.131841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.827 [2024-11-15 10:46:04.144402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.827 [2024-11-15 10:46:04.144801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.827 [2024-11-15 10:46:04.144829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.827 [2024-11-15 10:46:04.144845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.827 [2024-11-15 10:46:04.145074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.827 [2024-11-15 10:46:04.145292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.827 [2024-11-15 10:46:04.145314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.827 [2024-11-15 10:46:04.145329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.827 [2024-11-15 10:46:04.145342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
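A retry lands roughly every 13 ms here (compare the bracketed timestamps 04.050270, 04.063912, 04.077542, 04.091091, ...). When triaging a run like this it helps to count how many resets failed before one finally succeeded; a small sketch, assuming the console output was saved to a file named bdevperf.log (that file name is an assumption, not something this harness produces):

# How many resets failed, and did one eventually succeed?
grep -c 'Resetting controller failed'     bdevperf.log
grep -c 'Resetting controller successful' bdevperf.log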
00:27:15.827 [2024-11-15 10:46:04.157986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.827 [2024-11-15 10:46:04.158415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.827 [2024-11-15 10:46:04.158445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.827 [2024-11-15 10:46:04.158462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.827 [2024-11-15 10:46:04.158676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.827 [2024-11-15 10:46:04.158902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.827 [2024-11-15 10:46:04.158924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.827 [2024-11-15 10:46:04.158938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.827 [2024-11-15 10:46:04.158951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.827 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:15.827 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:27:15.827 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:15.827 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:15.827 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.827 [2024-11-15 10:46:04.171623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.827 [2024-11-15 10:46:04.171996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.827 [2024-11-15 10:46:04.172025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.827 [2024-11-15 10:46:04.172041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.827 [2024-11-15 10:46:04.172254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.827 [2024-11-15 10:46:04.172482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.827 [2024-11-15 10:46:04.172504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.827 [2024-11-15 10:46:04.172518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.827 [2024-11-15 10:46:04.172531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.827 [2024-11-15 10:46:04.185090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.827 [2024-11-15 10:46:04.185482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.827 [2024-11-15 10:46:04.185512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.827 [2024-11-15 10:46:04.185529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.827 [2024-11-15 10:46:04.185742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.827 [2024-11-15 10:46:04.185960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.827 [2024-11-15 10:46:04.185982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.827 [2024-11-15 10:46:04.185996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.827 [2024-11-15 10:46:04.186009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.827 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.827 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:15.827 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.828 [2024-11-15 10:46:04.197390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.828 [2024-11-15 10:46:04.198752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.828 [2024-11-15 10:46:04.199247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.828 [2024-11-15 10:46:04.199289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.828 [2024-11-15 10:46:04.199306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.828 [2024-11-15 10:46:04.199529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.828 [2024-11-15 10:46:04.199747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.828 [2024-11-15 10:46:04.199769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.828 [2024-11-15 10:46:04.199783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.828 [2024-11-15 10:46:04.199796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
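Interleaved with the reconnect noise, the target-side script creates the TCP transport. rpc_cmd is the harness wrapper around scripts/rpc.py; run by hand against the target's RPC socket, the same step would look roughly like the line below. The flags are copied verbatim from the trace above (-u 8192 sets an 8192-byte IO unit size), and the /var/tmp/spdk.sock path is the default socket, assumed here rather than read from the log.

# Create the TCP transport on the running nvmf_tgt, same flags as the rpc_cmd trace above.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192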
00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.828 [2024-11-15 10:46:04.212392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.828 [2024-11-15 10:46:04.212869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.828 [2024-11-15 10:46:04.212901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.828 [2024-11-15 10:46:04.212934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.828 [2024-11-15 10:46:04.213156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.828 [2024-11-15 10:46:04.213390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.828 [2024-11-15 10:46:04.213412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.828 [2024-11-15 10:46:04.213438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.828 [2024-11-15 10:46:04.213453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.828 3936.17 IOPS, 15.38 MiB/s [2024-11-15T09:46:04.291Z] [2024-11-15 10:46:04.227490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.828 [2024-11-15 10:46:04.227909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.828 [2024-11-15 10:46:04.227949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.828 [2024-11-15 10:46:04.227966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.828 [2024-11-15 10:46:04.228173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.828 [2024-11-15 10:46:04.228411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.828 [2024-11-15 10:46:04.228434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.828 [2024-11-15 10:46:04.228448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.828 [2024-11-15 10:46:04.228461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.828 [2024-11-15 10:46:04.241084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.828 [2024-11-15 10:46:04.241524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.828 [2024-11-15 10:46:04.241566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.828 [2024-11-15 10:46:04.241583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.828 [2024-11-15 10:46:04.241797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.828 [2024-11-15 10:46:04.242017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.828 [2024-11-15 10:46:04.242038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.828 [2024-11-15 10:46:04.242053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.828 [2024-11-15 10:46:04.242067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.828 Malloc0 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.828 [2024-11-15 10:46:04.254584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.828 [2024-11-15 10:46:04.255010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.828 [2024-11-15 10:46:04.255041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.828 [2024-11-15 10:46:04.255075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.828 [2024-11-15 10:46:04.255301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.828 [2024-11-15 10:46:04.255531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.828 [2024-11-15 10:46:04.255554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.828 [2024-11-15 10:46:04.255570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.828 [2024-11-15 10:46:04.255584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.828 [2024-11-15 10:46:04.268080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.828 [2024-11-15 10:46:04.268514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.828 [2024-11-15 10:46:04.268550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bca40 with addr=10.0.0.2, port=4420 00:27:15.828 [2024-11-15 10:46:04.268568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca40 is same with the state(6) to be set 00:27:15.828 [2024-11-15 10:46:04.268782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bca40 (9): Bad file descriptor 00:27:15.828 [2024-11-15 10:46:04.269001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.828 [2024-11-15 10:46:04.269023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.828 [2024-11-15 10:46:04.269037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.828 [2024-11-15 10:46:04.269050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:15.828 [2024-11-15 10:46:04.269790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.828 10:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 487213 00:27:15.828 [2024-11-15 10:46:04.281675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.086 [2024-11-15 10:46:04.401278] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
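Once the listener notice appears ("NVMe/TCP Target Listening on 10.0.0.2 port 4420"), the reconnect loop above finally reports "Resetting controller successful". The target-side sequence that gets it there is spread across the rpc_cmd traces; collected in one place, and again sketched as direct scripts/rpc.py calls rather than the harness wrapper, it is roughly:

# Back a subsystem with a 64 MB / 512-byte-block malloc bdev and expose it over TCP,
# mirroring the rpc_cmd calls traced above (-a allows any host, -s sets the serial number).
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420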
00:27:17.948 4391.57 IOPS, 17.15 MiB/s [2024-11-15T09:46:07.343Z] 4952.12 IOPS, 19.34 MiB/s [2024-11-15T09:46:08.276Z] 5387.67 IOPS, 21.05 MiB/s [2024-11-15T09:46:09.647Z] 5746.90 IOPS, 22.45 MiB/s [2024-11-15T09:46:10.577Z] 6022.91 IOPS, 23.53 MiB/s [2024-11-15T09:46:11.508Z] 6231.17 IOPS, 24.34 MiB/s [2024-11-15T09:46:12.439Z] 6421.69 IOPS, 25.08 MiB/s [2024-11-15T09:46:13.372Z] 6583.79 IOPS, 25.72 MiB/s [2024-11-15T09:46:13.372Z] 6740.20 IOPS, 26.33 MiB/s 00:27:24.909 Latency(us) 00:27:24.909 [2024-11-15T09:46:13.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.909 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:24.909 Verification LBA range: start 0x0 length 0x4000 00:27:24.909 Nvme1n1 : 15.01 6743.40 26.34 10411.64 0.00 7439.50 843.47 18932.62 00:27:24.909 [2024-11-15T09:46:13.372Z] =================================================================================================================== 00:27:24.909 [2024-11-15T09:46:13.372Z] Total : 6743.40 26.34 10411.64 0.00 7439.50 843.47 18932.62 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:25.166 rmmod nvme_tcp 00:27:25.166 rmmod nvme_fabrics 00:27:25.166 rmmod nvme_keyring 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 488014 ']' 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 488014 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 488014 ']' 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 488014 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 488014 
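The summary row is self-consistent: with 4096-byte IOs, 6743.40 IOPS works out to 6743.40 * 4096 / 2^20 ~= 26.34 MiB/s, which is exactly the MiB/s column. A one-liner for sanity-checking any row of this table (awk is just a convenience here, not part of the test):

# IOPS -> MiB/s for 4 KiB IOs: 6743.40 * 4096 / 1048576 ~= 26.34
awk 'BEGIN { printf "%.2f MiB/s\n", 6743.40 * 4096 / 1048576 }'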
00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 488014' 00:27:25.166 killing process with pid 488014 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 488014 00:27:25.166 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 488014 00:27:25.425 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:25.425 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:25.425 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:25.425 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:25.425 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:25.425 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:25.425 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:25.425 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:25.425 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:25.425 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.425 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.425 10:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.970 10:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:27.970 00:27:27.970 real 0m22.528s 00:27:27.970 user 0m59.594s 00:27:27.970 sys 0m4.507s 00:27:27.970 10:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:27.970 10:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.970 ************************************ 00:27:27.970 END TEST nvmf_bdevperf 00:27:27.971 ************************************ 00:27:27.971 10:46:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:27.971 10:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:27.971 10:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:27.971 10:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.971 ************************************ 00:27:27.971 START TEST nvmf_target_disconnect 00:27:27.971 ************************************ 00:27:27.971 10:46:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:27.971 * Looking for test storage... 
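Stepping back to the nvmftestfini trace just above: the harness removes only its own firewall rules because every rule it installs carries an 'SPDK_NVMF' comment (the ipts helper that adds the tagged ACCEPT rule shows up later in this log). Teardown then filters the tagged rules out wholesale, roughly:

# Drop only the rules the test suite added (all tagged with an SPDK_NVMF comment), keep everything else.
iptables-save | grep -v SPDK_NVMF | iptables-restore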
00:27:27.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:27.971 10:46:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:27.971 10:46:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:27:27.971 10:46:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:27.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.971 --rc genhtml_branch_coverage=1 00:27:27.971 --rc genhtml_function_coverage=1 00:27:27.971 --rc genhtml_legend=1 00:27:27.971 --rc geninfo_all_blocks=1 00:27:27.971 --rc geninfo_unexecuted_blocks=1 00:27:27.971 00:27:27.971 ' 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:27.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.971 --rc genhtml_branch_coverage=1 00:27:27.971 --rc genhtml_function_coverage=1 00:27:27.971 --rc genhtml_legend=1 00:27:27.971 --rc geninfo_all_blocks=1 00:27:27.971 --rc geninfo_unexecuted_blocks=1 00:27:27.971 00:27:27.971 ' 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:27.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.971 --rc genhtml_branch_coverage=1 00:27:27.971 --rc genhtml_function_coverage=1 00:27:27.971 --rc genhtml_legend=1 00:27:27.971 --rc geninfo_all_blocks=1 00:27:27.971 --rc geninfo_unexecuted_blocks=1 00:27:27.971 00:27:27.971 ' 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:27.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.971 --rc genhtml_branch_coverage=1 00:27:27.971 --rc genhtml_function_coverage=1 00:27:27.971 --rc genhtml_legend=1 00:27:27.971 --rc geninfo_all_blocks=1 00:27:27.971 --rc geninfo_unexecuted_blocks=1 00:27:27.971 00:27:27.971 ' 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:27.971 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:27.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:27.972 10:46:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:29.874 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.874 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:29.874 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:29.874 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:27:29.875 Found 0000:82:00.0 (0x8086 - 0x159b) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:27:29.875 Found 0000:82:00.1 (0x8086 - 0x159b) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:27:29.875 Found net devices under 0000:82:00.0: cvl_0_0 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:27:29.875 Found net devices under 0000:82:00.1: cvl_0_1 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
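The device discovery above maps each supported PCI function to its kernel net devices purely through sysfs: the netdev names live under /sys/bus/pci/devices/<BDF>/net/. A standalone sketch of the same lookup, using the two E810 functions printed in this log:

# List the net devices bound to each E810 port found above.
for pci in 0000:82:00.0 0000:82:00.1; do
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
done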
00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.875 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:30.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:27:30.134 00:27:30.134 --- 10.0.0.2 ping statistics --- 00:27:30.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.134 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:27:30.134 00:27:30.134 --- 10.0.0.1 ping statistics --- 00:27:30.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.134 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:30.134 ************************************ 00:27:30.134 START TEST nvmf_target_disconnect_tc1 00:27:30.134 ************************************ 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:30.134 10:46:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:30.134 [2024-11-15 10:46:18.492580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.134 [2024-11-15 10:46:18.492665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd9f40 with addr=10.0.0.2, port=4420 00:27:30.134 [2024-11-15 10:46:18.492698] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:30.134 [2024-11-15 10:46:18.492725] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:30.134 [2024-11-15 10:46:18.492739] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:30.134 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:30.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:30.134 Initializing NVMe Controllers 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:30.134 00:27:30.134 real 0m0.100s 00:27:30.134 user 0m0.053s 00:27:30.134 sys 0m0.047s 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.134 ************************************ 00:27:30.134 END TEST nvmf_target_disconnect_tc1 00:27:30.134 ************************************ 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:30.134 ************************************ 00:27:30.134 START TEST nvmf_target_disconnect_tc2 00:27:30.134 ************************************ 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=491070 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 491070 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 491070 ']' 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:30.134 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.392 [2024-11-15 10:46:18.611402] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:27:30.392 [2024-11-15 10:46:18.611495] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.392 [2024-11-15 10:46:18.688011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:30.392 [2024-11-15 10:46:18.752696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.392 [2024-11-15 10:46:18.752751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:30.392 [2024-11-15 10:46:18.752782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.392 [2024-11-15 10:46:18.752793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.392 [2024-11-15 10:46:18.752802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.392 [2024-11-15 10:46:18.754535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:30.392 [2024-11-15 10:46:18.754598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:30.392 [2024-11-15 10:46:18.754644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:30.392 [2024-11-15 10:46:18.754647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.650 Malloc0 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.650 [2024-11-15 10:46:18.944209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.650 10:46:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.650 [2024-11-15 10:46:18.972539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=491200 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:30.650 10:46:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:32.547 10:46:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 491070 00:27:32.547 10:46:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error 
(sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 [2024-11-15 10:46:20.997917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write 
completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 [2024-11-15 10:46:20.998229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Read completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.547 starting I/O 
failed 00:27:32.547 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 [2024-11-15 10:46:20.998589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 
00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Write completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 Read completed with error (sct=0, sc=8) 00:27:32.548 starting I/O failed 00:27:32.548 [2024-11-15 10:46:20.998879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.548 [2024-11-15 10:46:20.999089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:20.999142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:20.999343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:20.999439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:20.999574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:20.999600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:20.999722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:20.999761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:20.999869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:20.999893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.000036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.000065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.000253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.000313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.000454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.000480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 
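Note on the failure storm above: it is the expected consequence of the kill -9 issued against the target (pid 491070). The I/O already in flight on the four qpairs completes with the "CQ transport error -6 (No such device or address)" messages, and because nothing is listening on 10.0.0.2:4420 inside cvl_0_0_ns_spdk any more, every subsequent reconnect attempt gets ECONNREFUSED, which Linux reports as errno 111. A minimal spot check, not part of the captured run (ss/python3 availability on the test host is an assumption):

# Hypothetical check: list anything still listening on the NVMe/TCP port inside
# the target namespace. After the kill -9 this prints no sockets, which is why
# connect() keeps failing with errno 111.
ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'
# Confirm what errno 111 means on this platform.
python3 -c 'import os; print(os.strerror(111))'   # -> Connection refused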
00:27:32.548 [2024-11-15 10:46:21.000587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.000612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.000773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.000812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.000939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.000962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.001123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.001147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.001313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.001337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.001488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.001513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.001599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.001624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.001764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.001787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.001972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.001995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.002152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.002191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 
00:27:32.548 [2024-11-15 10:46:21.002337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.002385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.002518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.002544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.002674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.002698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.002804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.002827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.002984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.003008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.003146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.003169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.003301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.003325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.003488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.003537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.003686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.003713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.003848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.003887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 
00:27:32.548 [2024-11-15 10:46:21.004007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.004031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.004168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.004192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.004321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.004346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.004496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.004522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.004607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.004638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.004800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.004839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.004985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.005009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.005110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.005133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.005292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.005316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.005432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.005458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 
00:27:32.548 [2024-11-15 10:46:21.005547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.005572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.005701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.005725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.005855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.005894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.006015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.006039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.006170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.006194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.006309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.006335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.006508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.006555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.006714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.006741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.006894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.548 [2024-11-15 10:46:21.006918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.548 qpair failed and we were unable to recover it. 00:27:32.548 [2024-11-15 10:46:21.007023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.007047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 
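For reference, the target-side configuration performed by the rpc_cmd calls earlier in this block (Malloc0 bdev, TCP transport, the cnode1 subsystem, its namespace, and the data plus discovery listeners) maps onto the following rpc.py sequence. This is a sketch, not a transcript: the rpc.py path and the default RPC socket are assumptions, while the RPC names and arguments are the ones visible in the trace.

# Assumed invocation; the test run drives the same RPCs through the rpc_cmd wrapper.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420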
00:27:32.549 [2024-11-15 10:46:21.007174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.007199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.007321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.007346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.007475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.007501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.007632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.007673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.007789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.007827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.007953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.007977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.008107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.008132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.008257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.008295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.008461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.008500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.008602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.008630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 
00:27:32.549 [2024-11-15 10:46:21.008792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.008832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.008943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.008977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.009110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.009135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.009283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.009310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.009432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.009458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.009570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.009596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.009743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.009769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.009892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.009918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.010039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.010065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.010161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.010187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 
00:27:32.549 [2024-11-15 10:46:21.010359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.010405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.010510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.010537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.010667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.010694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.010860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.010884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.011055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.011079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.011218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.011244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.011380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.011408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.011508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.011534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.011683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.011708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.011820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.011846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 
00:27:32.549 [2024-11-15 10:46:21.011997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.012023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.012114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.012140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.012285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.012311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.012429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.012456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.012574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.012600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.012759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.012785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.549 [2024-11-15 10:46:21.012895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.549 [2024-11-15 10:46:21.012921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.549 qpair failed and we were unable to recover it. 00:27:32.824 [2024-11-15 10:46:21.013331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.824 [2024-11-15 10:46:21.013377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.824 qpair failed and we were unable to recover it. 00:27:32.824 [2024-11-15 10:46:21.013530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.824 [2024-11-15 10:46:21.013559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.824 qpair failed and we were unable to recover it. 00:27:32.824 [2024-11-15 10:46:21.013707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.824 [2024-11-15 10:46:21.013734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.824 qpair failed and we were unable to recover it. 
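All of these connect attempts go to 10.0.0.2:4420 over the cvl_0_0/cvl_0_1 interface pair that nvmf/common.sh wired up at the top of this block. Condensed, that plumbing is as follows (the commands are the ones from the trace; only the comments are added):

# Target interface moves into its own network namespace; the initiator side stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port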
00:27:32.824 [2024-11-15 10:46:21.013852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.824 [2024-11-15 10:46:21.013878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.824 qpair failed and we were unable to recover it. 00:27:32.824 [2024-11-15 10:46:21.014032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.824 [2024-11-15 10:46:21.014056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.824 qpair failed and we were unable to recover it. 00:27:32.824 [2024-11-15 10:46:21.014160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.824 [2024-11-15 10:46:21.014186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.824 qpair failed and we were unable to recover it. 00:27:32.824 [2024-11-15 10:46:21.014327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.824 [2024-11-15 10:46:21.014375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.824 qpair failed and we were unable to recover it. 00:27:32.824 [2024-11-15 10:46:21.014497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.824 [2024-11-15 10:46:21.014523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.824 qpair failed and we were unable to recover it. 00:27:32.824 [2024-11-15 10:46:21.014649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.824 [2024-11-15 10:46:21.014674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.824 qpair failed and we were unable to recover it. 00:27:32.824 [2024-11-15 10:46:21.014807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.824 [2024-11-15 10:46:21.014844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.824 qpair failed and we were unable to recover it. 00:27:32.824 [2024-11-15 10:46:21.014991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.824 [2024-11-15 10:46:21.015043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.824 qpair failed and we were unable to recover it. 00:27:32.824 [2024-11-15 10:46:21.015177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.015202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.015323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.015347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 
00:27:32.825 [2024-11-15 10:46:21.015457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.015482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.015599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.015629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.015760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.015798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.015956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.015980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.016137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.016162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.016283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.016308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.016441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.016466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.016576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.016600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.016747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.016784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.016937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.016959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 
00:27:32.825 [2024-11-15 10:46:21.017088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.017111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.017238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.017262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.017441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.017480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.017588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.017615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.017739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.017766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.017886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.017912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.018056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.018082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.018251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.018303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.018434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.018476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.018598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.018623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 
00:27:32.825 [2024-11-15 10:46:21.018774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.018799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.018948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.018973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.019068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.019093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.019204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.019244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.019391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.019430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.019558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.019585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.019698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.019723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.019898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.019957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.020089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.020120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.020266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.020292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 
00:27:32.825 [2024-11-15 10:46:21.020480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.020518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.020657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.020683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.020856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.020881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.021021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.021071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.021194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.021219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.825 qpair failed and we were unable to recover it. 00:27:32.825 [2024-11-15 10:46:21.021336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.825 [2024-11-15 10:46:21.021373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.021467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.021492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.021590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.021615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.021731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.021755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.021865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.021889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 
00:27:32.826 [2024-11-15 10:46:21.022047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.022070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.022203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.022228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.022398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.022452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.022604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.022632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.022782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.022807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.022923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.022947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.023084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.023108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.023264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.023289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.023446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.023473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.023619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.023658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 
00:27:32.826 [2024-11-15 10:46:21.023765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.023804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.023922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.023946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.024074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.024098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.024346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.024390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.024536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.024563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.024743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.024774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.025023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.025070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.025249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.025274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.025481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.025508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.025693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.025717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 
00:27:32.826 [2024-11-15 10:46:21.025864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.025912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.026062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.026104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.026231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.026271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.026406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.026431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.026568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.026593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.026847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.026870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.027040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.027063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.027234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.027258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.027434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.027464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.027585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.027618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 
00:27:32.826 [2024-11-15 10:46:21.027730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.027754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.027872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.826 [2024-11-15 10:46:21.027896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-11-15 10:46:21.028098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.028122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.028296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.028321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.028492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.028517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.028672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.028695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.028841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.028864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.028998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.029026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.029268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.029292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.029468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.029494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 
00:27:32.827 [2024-11-15 10:46:21.029585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.029609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.029775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.029813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.029984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.030028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.030238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.030261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.030438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.030464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.030551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.030576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.030735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.030759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.030967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.031017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.031199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.031222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.031410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.031434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 
00:27:32.827 [2024-11-15 10:46:21.031577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.031612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.031796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.031840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.031976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.031999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.032171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.032195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.032335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.032360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b48000b90 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.032520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.032558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.032694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.032720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.032891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.032935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.033045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.033075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.033199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.033223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 
00:27:32.827 [2024-11-15 10:46:21.033386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.033412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.033573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.033596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.033729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.033755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.033931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.033955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.034067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.034099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.034256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.034280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.034437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.034462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.034554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.034578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.034718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.034742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-11-15 10:46:21.034873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.827 [2024-11-15 10:46:21.034912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.827 qpair failed and we were unable to recover it. 
00:27:32.828 [2024-11-15 10:46:21.035068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.035091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.035169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.035193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.035358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.035394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.035539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.035574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.035727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.035789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.035978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.036023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.036201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.036225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.036400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.036455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.036573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.036600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.036784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.036807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 
00:27:32.828 [2024-11-15 10:46:21.036965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.037016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.037164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.037216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.037383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.037409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.037522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.037547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.037723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.037747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.037937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.037995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.038143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.038167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.038409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.038435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.038561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.038601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.038760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.038784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 
00:27:32.828 [2024-11-15 10:46:21.039035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.039079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.039243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.039266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.039429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.039453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.039605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.039630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.039813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.039837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.040058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.040109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.040294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.040319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.040531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.040556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.040724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.040747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.040876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.040915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 
00:27:32.828 [2024-11-15 10:46:21.041145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.041217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.041403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.041431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.041553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.041578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.041698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.041722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.041936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.041961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.042198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.042258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.042492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.828 [2024-11-15 10:46:21.042519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.828 qpair failed and we were unable to recover it. 00:27:32.828 [2024-11-15 10:46:21.042654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.042679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.042872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.042896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.043114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.043164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 
00:27:32.829 [2024-11-15 10:46:21.043381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.043434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.043555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.043580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.043819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.043866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.044070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.044118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.044288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.044312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.044500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.044527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.044626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.044665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.044781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.044806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.045038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.045093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.045306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.045330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 
00:27:32.829 [2024-11-15 10:46:21.045468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.045494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.045642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.045682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.045904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.045953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.046131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.046181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.046332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.046355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.046502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.046527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.046673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.046697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.046931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.046977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.047211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.047234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.047428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.047453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 
00:27:32.829 [2024-11-15 10:46:21.047567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.047592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.047776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.047799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.047968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.048016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.048137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.048161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.048329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.048353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-11-15 10:46:21.048483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.829 [2024-11-15 10:46:21.048508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.048636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.048675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.048827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.048851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.049045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.049069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.049242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.049266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 
00:27:32.830 [2024-11-15 10:46:21.049438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.049464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.049555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.049580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.049743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.049792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.050039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.050086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.050251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.050274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.050431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.050455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.050591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.050615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.050812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.050835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.051058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.051104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.051281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.051305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 
00:27:32.830 [2024-11-15 10:46:21.051508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.051533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.051686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.051733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.051900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.051946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.052128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.052152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.052357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.052405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.052527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.052552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.052729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.052752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.052970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.053017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.053258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.053281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.053464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.053489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 
00:27:32.830 [2024-11-15 10:46:21.053583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.053607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.053736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.053784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.053943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.053970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.054119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.054142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.054273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.054297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.054459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.054484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.054624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.054647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.054773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.054814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.054954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.055002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.055117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.055155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 
00:27:32.830 [2024-11-15 10:46:21.055340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.055369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.055501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.055552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.055694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.055747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-11-15 10:46:21.055872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.830 [2024-11-15 10:46:21.055897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.056068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.056092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.056258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.056296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.056496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.056521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.056644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.056693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.056827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.056881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.057036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.057075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 
00:27:32.831 [2024-11-15 10:46:21.057207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.057231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.057337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.057384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.057496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.057547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.057722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.057780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.057973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.057997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.058230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.058253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.058468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.058524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.058763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.058820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.058994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.059036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.059225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.059248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 
00:27:32.831 [2024-11-15 10:46:21.059383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.059408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.059547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.059595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.059751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.059801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.060018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.060065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.060257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.060280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.060486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.060535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.060674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.060724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.060947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.060994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.061159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.061182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.061299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.061323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 
00:27:32.831 [2024-11-15 10:46:21.061525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.061572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.061764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.061820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.061974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.062029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.062262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.062285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.062484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.062534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.062737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.062787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.063006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.063054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.063291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.063314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.063508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.063558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-11-15 10:46:21.063753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.063800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.831 qpair failed and we were unable to recover it. 
00:27:32.831 [2024-11-15 10:46:21.063981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.831 [2024-11-15 10:46:21.064030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.064205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.064228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.064451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.064498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.064629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.064684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.064777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.064802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.065008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.065057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.065282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.065305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.065485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.065543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.065728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.065779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.065942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.065990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 
00:27:32.832 [2024-11-15 10:46:21.066228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.066251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.066381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.066405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.066558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.066615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.066762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.066802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.067004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.067054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.067248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.067271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.067449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.067498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.067617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.067680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.067853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.067902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.068151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.068174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 
00:27:32.832 [2024-11-15 10:46:21.068339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.068369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.068523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.068573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.068806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.068857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.069091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.069139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.069327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.069351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.069530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.069580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.069764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.069813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.070016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.070072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.070240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.070263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.070425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.070479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 
00:27:32.832 [2024-11-15 10:46:21.070705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.070753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.070997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.071046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.071211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.071240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.071481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.071532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.071685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.071737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.071956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.072006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.072240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.072263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.072522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.832 [2024-11-15 10:46:21.072571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.832 qpair failed and we were unable to recover it. 00:27:32.832 [2024-11-15 10:46:21.072718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.072768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.072914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.072967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 
00:27:32.833 [2024-11-15 10:46:21.073176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.073199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.073404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.073429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.073586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.073636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.073777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.073829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.074016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.074039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.074226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.074250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.074394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.074418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.074568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.074630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.074826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.074874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.075065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.075117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 
00:27:32.833 [2024-11-15 10:46:21.075287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.075310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.075553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.075604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.075787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.075828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.076030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.076079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.076288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.076311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.076538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.076588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.076774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.076825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.077003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.077053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.077256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.077279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.077509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.077559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 
00:27:32.833 [2024-11-15 10:46:21.077746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.077789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.077970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.078020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.078165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.078188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.078369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.078393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.078554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.078601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.078788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.078839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.079092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.079141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.079333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.079356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.079529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.079589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 00:27:32.833 [2024-11-15 10:46:21.079712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.833 [2024-11-15 10:46:21.079762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.833 qpair failed and we were unable to recover it. 
00:27:32.833 [2024-11-15 10:46:21.079948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.833 [2024-11-15 10:46:21.079971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420
00:27:32.833 qpair failed and we were unable to recover it.
00:27:32.833 [... connect() to 10.0.0.2:4420 is retried repeatedly between 10:46:21.079948 and 10:46:21.131298; every attempt fails the same way, with errno = 111 reported by posix_sock_create and a sock connection error on tqpair=0x7f5b4c000b90 reported by nvme_tcp_qpair_connect_sock, each followed by "qpair failed and we were unable to recover it." ...]
00:27:32.839 [2024-11-15 10:46:21.131266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.839 [2024-11-15 10:46:21.131298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420
00:27:32.839 qpair failed and we were unable to recover it.
00:27:32.839 [2024-11-15 10:46:21.131491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.839 [2024-11-15 10:46:21.131545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.839 qpair failed and we were unable to recover it. 00:27:32.839 [2024-11-15 10:46:21.131738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.839 [2024-11-15 10:46:21.131786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.839 qpair failed and we were unable to recover it. 00:27:32.839 [2024-11-15 10:46:21.132015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.839 [2024-11-15 10:46:21.132061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.839 qpair failed and we were unable to recover it. 00:27:32.839 [2024-11-15 10:46:21.132295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.839 [2024-11-15 10:46:21.132318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.839 qpair failed and we were unable to recover it. 00:27:32.839 [2024-11-15 10:46:21.132478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.839 [2024-11-15 10:46:21.132537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.839 qpair failed and we were unable to recover it. 00:27:32.839 [2024-11-15 10:46:21.132719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.839 [2024-11-15 10:46:21.132768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.839 qpair failed and we were unable to recover it. 00:27:32.839 [2024-11-15 10:46:21.133013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.133061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.133236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.133259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.133454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.133502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.133712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.133758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 
00:27:32.840 [2024-11-15 10:46:21.134044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.134093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.134279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.134302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.134507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.134556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.134791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.134840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.135063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.135112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.135322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.135360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.135573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.135598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.135862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.135913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.136111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.136158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.136401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.136425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 
00:27:32.840 [2024-11-15 10:46:21.136599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.136658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.136890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.136938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.137079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.137132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.137330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.137353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.137568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.137592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.137775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.137822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.138042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.138091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.138256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.138279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.138404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.138429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.138654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.138708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 
00:27:32.840 [2024-11-15 10:46:21.138880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.138924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.139173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.139237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.139468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.139522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.139763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.139812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.140042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.140089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.140268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.140292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.140510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.140561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.140793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.140842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.141076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.141127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.141387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.141412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 
00:27:32.840 [2024-11-15 10:46:21.141657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.141701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.141943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.141991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.142191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.142239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.840 [2024-11-15 10:46:21.142481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.840 [2024-11-15 10:46:21.142531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.840 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.142754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.142801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.143015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.143061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.143304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.143327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.143497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.143521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.143753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.143801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.144015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.144066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 
00:27:32.841 [2024-11-15 10:46:21.144296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.144319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.144564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.144589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.144827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.144879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.145107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.145154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.145369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.145393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.145593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.145618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.145847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.145894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.146093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.146141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.146380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.146406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.146657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.146693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 
00:27:32.841 [2024-11-15 10:46:21.146901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.146947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.147153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.147200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.147421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.147456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.147697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.147747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.148002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.148053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.148217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.148241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.148375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.148400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.148656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.148707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.148889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.148937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.149133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.149183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 
00:27:32.841 [2024-11-15 10:46:21.149412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.149452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.149691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.149740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.149979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.150033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.150268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.150306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.150448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.150473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.150628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.150691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.150929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.150980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.151183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.151231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.151481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.151532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 00:27:32.841 [2024-11-15 10:46:21.151678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.151728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.841 qpair failed and we were unable to recover it. 
00:27:32.841 [2024-11-15 10:46:21.151876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.841 [2024-11-15 10:46:21.151927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.152096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.152120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.152318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.152341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.152620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.152670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.152864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.152910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.153114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.153162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.153399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.153423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.153612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.153660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.153872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.153920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.154138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.154189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 
00:27:32.842 [2024-11-15 10:46:21.154382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.154406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.154624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.154669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.154856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.154902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.155127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.155177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.155428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.155453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.155622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.155671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.155908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.155954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.156198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.156246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.156482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.156506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.156654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.156706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 
00:27:32.842 [2024-11-15 10:46:21.156911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.156958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.157214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.157263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.157422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.157476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.157706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.157755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.158002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.158051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.158266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.158289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.158539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.158587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.158828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.158874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.159071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.159120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.159360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.159392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 
00:27:32.842 [2024-11-15 10:46:21.159608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.159647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.159792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.159857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.160058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.160105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.160340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.160378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.160581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.160605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.160722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.160772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.160938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.160979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.161216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.842 [2024-11-15 10:46:21.161265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.842 qpair failed and we were unable to recover it. 00:27:32.842 [2024-11-15 10:46:21.161509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.843 [2024-11-15 10:46:21.161559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.843 qpair failed and we were unable to recover it. 00:27:32.843 [2024-11-15 10:46:21.161762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.843 [2024-11-15 10:46:21.161811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.843 qpair failed and we were unable to recover it. 
00:27:32.843 [2024-11-15 10:46:21.162055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.843 [2024-11-15 10:46:21.162104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.843 qpair failed and we were unable to recover it. 00:27:32.843 [2024-11-15 10:46:21.162335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.843 [2024-11-15 10:46:21.162380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.843 qpair failed and we were unable to recover it. 00:27:32.843 [2024-11-15 10:46:21.162560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.843 [2024-11-15 10:46:21.162585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.843 qpair failed and we were unable to recover it. 00:27:32.843 [2024-11-15 10:46:21.162825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.843 [2024-11-15 10:46:21.162872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.843 qpair failed and we were unable to recover it. 00:27:32.843 [2024-11-15 10:46:21.163115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.843 [2024-11-15 10:46:21.163163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.843 qpair failed and we were unable to recover it. 00:27:32.843 [2024-11-15 10:46:21.163343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.843 [2024-11-15 10:46:21.163372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.843 qpair failed and we were unable to recover it. 00:27:32.843 [2024-11-15 10:46:21.163572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.843 [2024-11-15 10:46:21.163597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.843 qpair failed and we were unable to recover it. 00:27:32.843 [2024-11-15 10:46:21.163840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.843 [2024-11-15 10:46:21.163889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.843 qpair failed and we were unable to recover it. 00:27:32.843 [2024-11-15 10:46:21.164110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.843 [2024-11-15 10:46:21.164159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.843 qpair failed and we were unable to recover it. 00:27:32.843 [2024-11-15 10:46:21.164384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.843 [2024-11-15 10:46:21.164408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.843 qpair failed and we were unable to recover it. 
00:27:32.843 [2024-11-15 10:46:21.164550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.843 [2024-11-15 10:46:21.164575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420
00:27:32.843 qpair failed and we were unable to recover it.
00:27:32.849 [... the same three-line sequence repeats continuously from 10:46:21.164550 through 10:46:21.208106 (console prefixes 00:27:32.843-00:27:32.849): every connect() attempt to 10.0.0.2:4420 for tqpair=0x7f5b4c000b90 fails with errno = 111 and the qpair cannot be recovered; only the per-attempt timestamps differ ...]
00:27:32.849 [2024-11-15 10:46:21.208226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.849 [2024-11-15 10:46:21.208250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.849 qpair failed and we were unable to recover it. 00:27:32.849 [2024-11-15 10:46:21.208452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.849 [2024-11-15 10:46:21.208493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.849 qpair failed and we were unable to recover it. 00:27:32.849 [2024-11-15 10:46:21.208613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.849 [2024-11-15 10:46:21.208637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.849 qpair failed and we were unable to recover it. 00:27:32.849 [2024-11-15 10:46:21.208826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.849 [2024-11-15 10:46:21.208849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.849 qpair failed and we were unable to recover it. 00:27:32.849 [2024-11-15 10:46:21.208986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.849 [2024-11-15 10:46:21.209017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.849 qpair failed and we were unable to recover it. 00:27:32.849 [2024-11-15 10:46:21.209131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.849 [2024-11-15 10:46:21.209167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.849 qpair failed and we were unable to recover it. 00:27:32.849 [2024-11-15 10:46:21.209338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.849 [2024-11-15 10:46:21.209369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.849 qpair failed and we were unable to recover it. 00:27:32.849 [2024-11-15 10:46:21.209507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.849 [2024-11-15 10:46:21.209533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.849 qpair failed and we were unable to recover it. 00:27:32.849 [2024-11-15 10:46:21.209682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.849 [2024-11-15 10:46:21.209707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.849 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.209854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.209899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 
00:27:32.850 [2024-11-15 10:46:21.210103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.210127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.210294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.210318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.210454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.210480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.210610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.210637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.210825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.210849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.211034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.211058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.211219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.211245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.211387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.211414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.211555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.211605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.211846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.211870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 
00:27:32.850 [2024-11-15 10:46:21.212010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.212057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.212244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.212268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.212488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.212538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.212711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.212758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.212895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.212941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.213064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.213104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.213240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.213265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.213382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.213409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.213563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.213613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.213751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.213790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 
00:27:32.850 [2024-11-15 10:46:21.213946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.213970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.214143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.214167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.214268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.214293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.214395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.214421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.214580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.214628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.214774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.214826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.215001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.215025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.215145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.215170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.215431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.215470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.215670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.215722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 
00:27:32.850 [2024-11-15 10:46:21.215881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.215928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.216094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.216119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.216265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.216304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.216539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.216590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.216801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.850 [2024-11-15 10:46:21.216849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.850 qpair failed and we were unable to recover it. 00:27:32.850 [2024-11-15 10:46:21.217019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.217065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.217276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.217300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.217515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.217563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.217756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.217802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.217943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.217992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 
00:27:32.851 [2024-11-15 10:46:21.218192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.218216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.218454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.218480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.218630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.218676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.218928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.218976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.219135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.219166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.219398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.219425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.219565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.219590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.219692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.219746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.219863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.219913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.220018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.220069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 
00:27:32.851 [2024-11-15 10:46:21.220214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.220243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.220455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.220481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.220609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.220635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.220875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.220898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.221059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.221083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.221270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.221310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.221491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.221540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.221689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.221714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.221811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.221836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.221998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.222023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 
00:27:32.851 [2024-11-15 10:46:21.222193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.222217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.222431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.222486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.222648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.222693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.222799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.222824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.222992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.223017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.223139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.851 [2024-11-15 10:46:21.223164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.851 qpair failed and we were unable to recover it. 00:27:32.851 [2024-11-15 10:46:21.223418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.223459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.223621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.223670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.223855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.223902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.224083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.224134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 
00:27:32.852 [2024-11-15 10:46:21.224317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.224354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.224541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.224589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.224745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.224797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.225031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.225078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.225328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.225352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.225531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.225583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.225741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.225788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.225973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.226019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.226177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.226200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.226436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.226461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 
00:27:32.852 [2024-11-15 10:46:21.226624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.226665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.226772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.226836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.227040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.227088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.227275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.227306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.227462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.227510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.227645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.227696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.227856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.227902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.228025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.228050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.228181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.228207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.228334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.228360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 
00:27:32.852 [2024-11-15 10:46:21.228499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.228525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.228664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.228689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.228915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.228940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.229079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.229119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.229235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.229260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.229427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.229454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.229570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.229596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.229755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.229795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.229973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.229998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.230131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.230170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 
00:27:32.852 [2024-11-15 10:46:21.230305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.230344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.230460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.230485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.852 qpair failed and we were unable to recover it. 00:27:32.852 [2024-11-15 10:46:21.230574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.852 [2024-11-15 10:46:21.230599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.230758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.230782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.230940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.230986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.231141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.231165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.231301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.231326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.231514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.231561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.231741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.231787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.231973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.232024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 
00:27:32.853 [2024-11-15 10:46:21.232232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.232256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.232424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.232475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.232623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.232675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.232832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.232879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.233113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.233137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.233374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.233399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.233588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.233641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.233780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.233830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.233993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.234038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.234145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.234169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 
00:27:32.853 [2024-11-15 10:46:21.234309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.234334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.234517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.234564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.234668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.234718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.234890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.234936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.235122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.235146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.235320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.235344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.235505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.235543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.235698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.235744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.235990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.236036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.236187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.236211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 
00:27:32.853 [2024-11-15 10:46:21.236379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.236414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.853 [2024-11-15 10:46:21.236526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.853 [2024-11-15 10:46:21.236576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.853 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.236796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.236842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.236977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.237023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.237203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.237227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.237423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.237448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.237605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.237654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.237778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.237824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.237951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.237976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.238161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.238200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 
00:27:32.854 [2024-11-15 10:46:21.238396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.238430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.238534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.238575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.238767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.238793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.238980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.239004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.239139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.239164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.239302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.239326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.239507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.239556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.239775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.239821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.239959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.240015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.240187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.240217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 
00:27:32.854 [2024-11-15 10:46:21.240437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.240483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.240610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.240634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.240828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.240852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.241054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.241078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.241213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.241238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.241395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.241435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.241542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.241591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.241802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.241847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.242045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.242069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.242255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.242278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 
00:27:32.854 [2024-11-15 10:46:21.242480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.242527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.242673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.242719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.242883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.242924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.243055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.243094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.243228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.243252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.243413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.243451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.243599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.243625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.243785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.243837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.244056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.854 [2024-11-15 10:46:21.244080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.854 qpair failed and we were unable to recover it. 00:27:32.854 [2024-11-15 10:46:21.244220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.244244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 
00:27:32.855 [2024-11-15 10:46:21.244381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.244407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.244580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.244626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.244833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.244876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.245014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.245038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.245168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.245193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.245335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.245359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.245491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.245537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.245686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.245726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.245896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.245941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.246133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.246156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 
00:27:32.855 [2024-11-15 10:46:21.246331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.246355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.246526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.246558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.246719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.246751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.246920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.246951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.247117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.247156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.247318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.247356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.247537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.247581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.247703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.247750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.247910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.247954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.248136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.248164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 
00:27:32.855 [2024-11-15 10:46:21.248278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.248303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.248434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.248461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.248637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.248680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.248869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.248912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.249107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.249130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.249273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.249299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.249429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.249475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.249673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.249716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.249912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.249954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.250095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.250120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 
00:27:32.855 [2024-11-15 10:46:21.250273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.250298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.250426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.250475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.250643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.250687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.250866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.250898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.251034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.251078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.251213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.251252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.251447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.855 [2024-11-15 10:46:21.251474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.855 qpair failed and we were unable to recover it. 00:27:32.855 [2024-11-15 10:46:21.251574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.251600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.251738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.251763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.251884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.251909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 
00:27:32.856 [2024-11-15 10:46:21.252090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.252115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.252259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.252284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.252401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.252427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.252646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.252686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.252845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.252869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.253031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.253070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.253193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.253243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.253427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.253472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.253682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.253725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.253850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.253880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 
00:27:32.856 [2024-11-15 10:46:21.254043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.254067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.254193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.254227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.254433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.254474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.254603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.254628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.254834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.254858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.255058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.255082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.255229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.255258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.255433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.255460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.255593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.255619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.255790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.255817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 
00:27:32.856 [2024-11-15 10:46:21.256028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.256052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.256186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.256210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.256321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.256369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.256499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.256524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.256662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.256687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.256825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.256863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.257002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.257041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.257224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.257248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.257397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.257424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.257612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.257653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 
00:27:32.856 [2024-11-15 10:46:21.257824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.257865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.258071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.258095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.258269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.258292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.258485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.856 [2024-11-15 10:46:21.258513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.856 qpair failed and we were unable to recover it. 00:27:32.856 [2024-11-15 10:46:21.258738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.258780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.258999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.259041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.259227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.259251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.259390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.259439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.259651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.259692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.259847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.259889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 
00:27:32.857 [2024-11-15 10:46:21.260090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.260114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.260277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.260301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.260533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.260576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.260811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.260852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.261029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.261071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.261320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.261345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.261535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.261576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.261719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.261746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.261921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.261947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.262179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.262205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 
00:27:32.857 [2024-11-15 10:46:21.262448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.262473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.262702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.262728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.262861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.262887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.263068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.263102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.263255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.263281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.263428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.263453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.263635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.263679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.263838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.263874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.264065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.264092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.264267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.264294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 
00:27:32.857 [2024-11-15 10:46:21.264447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.264472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.264591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.264616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.264816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.264840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.264968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.264993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.265114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.265140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.265294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.265318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.265453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.265489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.265668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.265694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.265836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.265871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.265995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.266019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 
00:27:32.857 [2024-11-15 10:46:21.266245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.266271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.266424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.857 [2024-11-15 10:46:21.266451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.857 qpair failed and we were unable to recover it. 00:27:32.857 [2024-11-15 10:46:21.266673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.266699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.266904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.266929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.267050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.267096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.267314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.267340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.267536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.267561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.267745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.267770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.267855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.267894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.268032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.268055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 
00:27:32.858 [2024-11-15 10:46:21.268211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.268235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.268427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.268468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.268623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.268661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.268791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.268815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.268914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.268938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.269174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.269199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.269427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.269458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.269551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.269575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.269757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.269782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.269944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.269968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 
00:27:32.858 [2024-11-15 10:46:21.270069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.270092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.270254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.270279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.270436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.270461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.270669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.270709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.270850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.270878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.271073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.271099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.271259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.271298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.271401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.271428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.271666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.271690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.271916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.271940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 
00:27:32.858 [2024-11-15 10:46:21.272127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.272151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.272384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.272411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.272629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.272654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.272781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.272806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.273004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.273029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.273181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.858 [2024-11-15 10:46:21.273216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.858 qpair failed and we were unable to recover it. 00:27:32.858 [2024-11-15 10:46:21.273388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.859 [2024-11-15 10:46:21.273414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.859 qpair failed and we were unable to recover it. 00:27:32.859 [2024-11-15 10:46:21.273608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.859 [2024-11-15 10:46:21.273634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.859 qpair failed and we were unable to recover it. 00:27:32.859 [2024-11-15 10:46:21.273823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.859 [2024-11-15 10:46:21.273867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.859 qpair failed and we were unable to recover it. 00:27:32.859 [2024-11-15 10:46:21.274079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.859 [2024-11-15 10:46:21.274104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:32.859 qpair failed and we were unable to recover it. 
00:27:33.135 [2024-11-15 10:46:21.274284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.135 [2024-11-15 10:46:21.274310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.135 qpair failed and we were unable to recover it. 00:27:33.135 [2024-11-15 10:46:21.274485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.135 [2024-11-15 10:46:21.274512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.135 qpair failed and we were unable to recover it. 00:27:33.135 [2024-11-15 10:46:21.274670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.274706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.274832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.274876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.275116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.275142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.275251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.275276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.275400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.275426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.275566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.275591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.275725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.275750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.275932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.275966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 
00:27:33.136 [2024-11-15 10:46:21.276092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.276118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.276268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.276308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.276458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.276484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.276654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.276679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.276874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.276898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.277039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.277085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.277221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.277247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.277389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.277415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.277591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.277616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.277760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.277785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 
00:27:33.136 [2024-11-15 10:46:21.277921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.277946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.278101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.278126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.278310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.278334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.278542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.278567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.278746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.278771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.278970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.278995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.279214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.279239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.279376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.279417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.279653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.279677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.279904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.279928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 
00:27:33.136 [2024-11-15 10:46:21.280072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.280100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.280231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.280256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.280389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.280430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.280568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.280594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.280743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.280768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.280938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.280962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.281091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.281115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.281359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.281405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.281583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.281607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.136 qpair failed and we were unable to recover it. 00:27:33.136 [2024-11-15 10:46:21.281767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-11-15 10:46:21.281791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 
00:27:33.137 [2024-11-15 10:46:21.281957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.281980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.282206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.282230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.282404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.282439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.282599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.282623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.282850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.282874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.283051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.283075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.283231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.283255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.283463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.283488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.283668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.283697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.283869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.283892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 
00:27:33.137 [2024-11-15 10:46:21.284094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.284117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.284264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.284288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.284479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.284504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.284666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.284705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.284897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.284920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.285047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.285085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.285268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.285292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.285497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.285527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.285726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.285750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.285919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.285943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 
00:27:33.137 [2024-11-15 10:46:21.286178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.286201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.286448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.286474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.286621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.286646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.286888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.286912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.287063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.287092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.287214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.287238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.287390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.287416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.287607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.287632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.287828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.287851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.287998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.288022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 
00:27:33.137 [2024-11-15 10:46:21.288266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.288290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.288437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.288463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.288609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.288635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.288795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.288833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.289031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.289055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.289252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.289277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.289434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.289459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-11-15 10:46:21.289684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-11-15 10:46:21.289708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.289870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.289893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.290037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.290076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 
00:27:33.138 [2024-11-15 10:46:21.290256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.290279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.290431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.290480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.290721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.290745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.290941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.290964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.291108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.291132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.291336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.291360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.291589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.291614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.291776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.291799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.291983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.292006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.292198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.292221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 
00:27:33.138 [2024-11-15 10:46:21.292350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.292405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.292532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.292572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.292783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.292807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.293020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.293043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.293236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.293259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.293423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.293448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.293660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.293685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.293826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.293850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.294100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.294123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.294302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.294325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 
00:27:33.138 [2024-11-15 10:46:21.294539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.294564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.294694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.294729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.294847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.294871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.294999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.295023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.295261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.295285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.295519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.295545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.295756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.295779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.295990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.296014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.296151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.296175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.296330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.296367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 
00:27:33.138 [2024-11-15 10:46:21.296605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.296631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.296854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.296876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.297073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.297099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.297207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.297245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.297452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.297478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.297674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-11-15 10:46:21.297698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-11-15 10:46:21.297875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.297898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.298089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.298113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.298307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.298331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.298480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.298520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 
00:27:33.139 [2024-11-15 10:46:21.298698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.298721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.298940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.298962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.299098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.299136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.299244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.299269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.299372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.299396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.299538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.299568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.299725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.299749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.299919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.299968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.300167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.300191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.300347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.300409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 
00:27:33.139 [2024-11-15 10:46:21.300551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.300576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.300794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.300818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.301023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.301046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.301188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.301215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.301390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.301416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.301538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.301576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.301770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.301793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.301991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.302030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.302229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.302252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-11-15 10:46:21.302442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-11-15 10:46:21.302468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 
00:27:33.139 [2024-11-15 10:46:21.302660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.139 [2024-11-15 10:46:21.302684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:33.139 qpair failed and we were unable to recover it.
00:27:33.139 [... the same posix_sock_create (connect() failed, errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 repeats continuously from 10:46:21.302903 through 10:46:21.347443, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:33.145 [2024-11-15 10:46:21.347587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-11-15 10:46:21.347612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-11-15 10:46:21.347801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.347825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.347978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.348001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.348227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.348251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.348421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.348447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.348656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.348680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.348871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.348894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.349056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.349079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.349267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.349290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.349525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.349551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.349723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.349747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 
00:27:33.145 [2024-11-15 10:46:21.349956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.349980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.350203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.350227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.350415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.350440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.350655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.350679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.350827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.350850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.351061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.351085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.351313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.351336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.351547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.351571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.351786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.351810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.351947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.351974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 
00:27:33.145 [2024-11-15 10:46:21.352194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.352218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.352432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.352456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.352589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.352613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.352832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.352856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.353038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.353061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.353228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.353251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.353436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.353462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.145 qpair failed and we were unable to recover it. 00:27:33.145 [2024-11-15 10:46:21.353701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.145 [2024-11-15 10:46:21.353725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.353962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.353985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.354213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.354237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 
00:27:33.146 [2024-11-15 10:46:21.354462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.354487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.354719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.354742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.354907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.354931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.355152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.355175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.355373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.355409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.355632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.355656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.355875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.355898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.356133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.356156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.356372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.356397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.356618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.356658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 
00:27:33.146 [2024-11-15 10:46:21.356839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.356863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.357092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.357115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.357309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.357332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.357562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.357588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.357809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.357832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.358004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.358028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.358230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.358253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.358444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.358471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.358630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.358668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.358843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.358866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 
00:27:33.146 [2024-11-15 10:46:21.359023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.359045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.359268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.359292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.359416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.359440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.359592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.359617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.359820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.359843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.360048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.360071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.360300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.360324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.360515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.360539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.360724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.360747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.360969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.360992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 
00:27:33.146 [2024-11-15 10:46:21.361194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.361218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.361419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.361443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.361578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.361602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.361840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.361863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.362031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.362053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-11-15 10:46:21.362286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-11-15 10:46:21.362309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.362505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.362531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.362732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.362756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.362986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.363009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.363199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.363221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 
00:27:33.147 [2024-11-15 10:46:21.363431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.363456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.363618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.363642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.363763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.363786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.363928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.363952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.364197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.364221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.364394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.364420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.364647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.364685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.364875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.364897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.365116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.365138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.365286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.365310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 
00:27:33.147 [2024-11-15 10:46:21.365528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.365553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.365721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.365745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.365964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.365986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.366210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.366232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.366455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.366479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.366678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.366702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.366857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.366879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.367070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.367098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.367304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.367327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.367557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.367582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 
00:27:33.147 [2024-11-15 10:46:21.367741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.367765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.367976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.368000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.368186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.368210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.368418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.368444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.368725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.368750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.368914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.368938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.369168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.369192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.369373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.369397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.369617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.369642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.369833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.369857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 
00:27:33.147 [2024-11-15 10:46:21.370080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.370118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.370373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.370397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.370611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.370636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.370858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-11-15 10:46:21.370882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-11-15 10:46:21.371059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.371083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.371312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.371335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.371566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.371591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.371752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.371775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.372009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.372032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.372215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.372238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 
00:27:33.148 [2024-11-15 10:46:21.372446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.372471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.372674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.372698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.372919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.372942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.373144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.373168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.373355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.373409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.373576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.373601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.373795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.373818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.374009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.374031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.374195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.374219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.374461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.374486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 
00:27:33.148 [2024-11-15 10:46:21.374621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.374645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.374810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.374848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.375066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.375090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.375282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.375304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.375478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.375502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.375626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.375651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.375846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.375869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.376107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.376130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.376314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.376338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.376565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.376590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 
00:27:33.148 [2024-11-15 10:46:21.376809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.376832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.377025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.377048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.377255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.377279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.377503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.377528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.377749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.377772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.377953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.377976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.378173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.378196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.378398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.378423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.378649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.378687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-11-15 10:46:21.378832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-11-15 10:46:21.378855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 
00:27:33.154 [2024-11-15 10:46:21.421611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.421635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.421731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.421754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.421874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.421897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.422057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.422081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.422262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.422286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.422497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.422521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.422678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.422703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.422830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.422854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.422960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.422985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.423118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.423142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 
00:27:33.154 [2024-11-15 10:46:21.423289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.423314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.423500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.423525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.423662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.423703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.423831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.423879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.424050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.424074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.424231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.424270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.424485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.424510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.424627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.424661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.424798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.424841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.424962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.424986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 
00:27:33.154 [2024-11-15 10:46:21.425125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.425149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.425288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.425313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.425467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.425493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.425593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.425627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.425768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.154 [2024-11-15 10:46:21.425792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.154 qpair failed and we were unable to recover it. 00:27:33.154 [2024-11-15 10:46:21.425979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.426003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.426176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.426200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.426311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.426350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.426481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.426506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.426651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.426675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-11-15 10:46:21.426853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.426877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.427053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.427077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.427197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.427236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.427327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.427382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.427488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.427513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.427614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.427647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.427787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.427812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.428012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.428046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.428214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.428238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.428377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.428402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-11-15 10:46:21.428509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.428534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.428709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.428733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.428954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.428977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.429115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.429154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.429286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.429310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.429462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.429488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.429609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.429633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.429783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.429807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.429951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.429992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.430141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.430166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-11-15 10:46:21.430357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.430389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.430503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.430529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.430691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.430726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.430962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.430985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.431216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.431239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.431428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.431454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.431619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.431643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.431799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.431837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.432069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.432092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.432318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.432357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-11-15 10:46:21.432502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.432526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.432696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.432720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.155 [2024-11-15 10:46:21.432907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.155 [2024-11-15 10:46:21.432930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.155 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.433100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.433124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.433330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.433354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.433518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.433542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.433650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.433676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.433802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.433826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.433968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.433992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.434227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.434251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 
00:27:33.156 [2024-11-15 10:46:21.434478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.434504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.434674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.434698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.434944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.434967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.435172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.435196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.435337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.435400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.435529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.435555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.435637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.435661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.435805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.435830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.436008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.436031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.436219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.436258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 
00:27:33.156 [2024-11-15 10:46:21.436435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.436460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.436554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.436579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.436722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.436747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.436879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.436921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.437087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.437110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.437333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.437356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.437492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.437520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.437632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.437672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.437820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.437846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.438078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.438101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 
00:27:33.156 [2024-11-15 10:46:21.438312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.438335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.438496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.438521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.438696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.438734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.438952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.438976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.439143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.439166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.439295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.439320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.439461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.439485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.439633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.439657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.439876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.439899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.440093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.440116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 
00:27:33.156 [2024-11-15 10:46:21.440311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.156 [2024-11-15 10:46:21.440336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.156 qpair failed and we were unable to recover it. 00:27:33.156 [2024-11-15 10:46:21.440478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.440518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.440605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.440630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.440791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.440819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.441062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.441084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.441320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.441359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.441510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.441534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.441654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.441688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.441834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.441872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.442011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.442035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 
00:27:33.157 [2024-11-15 10:46:21.442208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.442252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.442464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.442500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.442654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.442678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.442804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.442832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.443068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.443091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.443286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.443309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.443532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.443556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.443647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.443685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.443886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.443908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.444145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.444168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 
00:27:33.157 [2024-11-15 10:46:21.444338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.444366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.444482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.444506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.444659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.444682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.444885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.444909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.445090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.445113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.445359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.445388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.445539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.445564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.445701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.445724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.445889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.445926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 00:27:33.157 [2024-11-15 10:46:21.446117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.157 [2024-11-15 10:46:21.446140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.157 qpair failed and we were unable to recover it. 
00:27:33.157 [2024-11-15 10:46:21.446384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.157 [2024-11-15 10:46:21.446409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:33.157 qpair failed and we were unable to recover it.
[... the same three-line sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously for every connection attempt with timestamps from 10:46:21.446 through 10:46:21.487 ...]
00:27:33.163 [2024-11-15 10:46:21.487772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.163 [2024-11-15 10:46:21.487811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:33.163 qpair failed and we were unable to recover it.
00:27:33.163 [2024-11-15 10:46:21.487974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.163 [2024-11-15 10:46:21.487998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.163 qpair failed and we were unable to recover it. 00:27:33.163 [2024-11-15 10:46:21.488158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.163 [2024-11-15 10:46:21.488197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.163 qpair failed and we were unable to recover it. 00:27:33.163 [2024-11-15 10:46:21.488371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.163 [2024-11-15 10:46:21.488396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.163 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.488579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.488603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.488817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.488841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.489048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.489071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.489246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.489270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.489487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.489512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.489652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.489700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.489875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.489899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 
00:27:33.164 [2024-11-15 10:46:21.490077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.490100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.490269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.490302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.490475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.490501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.490632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.490657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.490800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.490839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.491015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.491038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.491208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.491232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.491390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.491431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.491576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.491611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.491793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.491816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 
00:27:33.164 [2024-11-15 10:46:21.491923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.491962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.492102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.492126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.492253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.492277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.492395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.492422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.492569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.492594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.492792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.492816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.492989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.493013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.493222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.493246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.493386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.493410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.493568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.493592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 
00:27:33.164 [2024-11-15 10:46:21.493786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.493810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.493961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.493987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.494237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.494260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.494373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.494398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.494536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.494561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.494759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.494783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.494967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.494990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.495189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.495213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.495368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.495419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.495575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.495610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 
00:27:33.164 [2024-11-15 10:46:21.495794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.164 [2024-11-15 10:46:21.495818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.164 qpair failed and we were unable to recover it. 00:27:33.164 [2024-11-15 10:46:21.496029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.496052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.496254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.496278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.496474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.496499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.496697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.496721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.496931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.496955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.497060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.497085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.497237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.497262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.497425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.497451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.497602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.497626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 
00:27:33.165 [2024-11-15 10:46:21.497768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.497808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.498045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.498068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.498250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.498274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.498478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.498503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.498747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.498770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.498910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.498934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.499098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.499137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.499374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.499398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.499577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.499610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.499826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.499851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 
00:27:33.165 [2024-11-15 10:46:21.500092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.500116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.500303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.500336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.500593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.500618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.500742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.500765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.500975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.500999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.501171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.501194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.501304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.501342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.501511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.501551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.501747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.501785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 00:27:33.165 [2024-11-15 10:46:21.501932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.501956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.165 qpair failed and we were unable to recover it. 
00:27:33.165 [2024-11-15 10:46:21.502123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.165 [2024-11-15 10:46:21.502172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.502348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.502400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.502608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.502632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.502787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.502811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.503052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.503076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.503211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.503235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.503412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.503438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.503640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.503663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.503887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.503911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.504116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.504140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 
00:27:33.166 [2024-11-15 10:46:21.504397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.504422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.504524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.504548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.504749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.504772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.504948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.504984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.505137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.505170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.505348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.505406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.505586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.505611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.505807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.505831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.506036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.506059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.506271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.506295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 
00:27:33.166 [2024-11-15 10:46:21.506525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.506551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.506708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.506732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.506969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.506994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.507215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.507238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.507387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.507427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.507651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.507676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.507797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.507836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.507930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.507953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.508107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.508146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.508333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.508357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 
00:27:33.166 [2024-11-15 10:46:21.508492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.508532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.508734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.508772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.508905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.508928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.509111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.509150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.509404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.509430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.509622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.509646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.509826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.509849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.510018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.166 [2024-11-15 10:46:21.510042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.166 qpair failed and we were unable to recover it. 00:27:33.166 [2024-11-15 10:46:21.510203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.510227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.510422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.510462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 
00:27:33.167 [2024-11-15 10:46:21.510568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.510593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.510757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.510794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.510973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.511015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.511188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.511212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.511391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.511416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.511552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.511577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.511805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.511829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.512051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.512074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.512300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.512334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.512462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.512486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 
00:27:33.167 [2024-11-15 10:46:21.512703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.512741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.512907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.512938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.513078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.513117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.513311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.513335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.513514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.513540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.513737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.513777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.513976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.513999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.514189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.514213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.514349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.514393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.514553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.514578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 
00:27:33.167 [2024-11-15 10:46:21.514747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.514770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.514965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.514988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.515155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.515188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.515338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.515372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.515598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.515623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.515719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.515743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.515923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.515963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.516110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.516148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.516261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.516285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.167 [2024-11-15 10:46:21.516431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.516457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 
00:27:33.167 [2024-11-15 10:46:21.516581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.167 [2024-11-15 10:46:21.516607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.167 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.516732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.516757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.516945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.516968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.517146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.517170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.517366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.517405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.517637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.517662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.517833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.517856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.517954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.517978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.518134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.518159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.518285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.518309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 
00:27:33.168 [2024-11-15 10:46:21.518454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.518503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.518660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.518685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.518851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.518875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.519118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.519145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.519341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.519376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.519557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.519582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.519766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.519790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.519917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.519955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.520069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.520093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.520222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.520247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 
00:27:33.168 [2024-11-15 10:46:21.520450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.520491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.520642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.520666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.520793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.520817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.521028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.521051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.521260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.521283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.521502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.521527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.521604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.521641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.521769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.521794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.521945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.521976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 00:27:33.168 [2024-11-15 10:46:21.522142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.168 [2024-11-15 10:46:21.522165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.168 qpair failed and we were unable to recover it. 
00:27:33.168 [2024-11-15 10:46:21.522340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.522389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.522541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.522567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.522773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.522812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.523008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.523032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.523190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.523214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.523347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.523414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.523575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.523600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.523759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.523783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.524005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.524028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.524187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.524221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 
00:27:33.169 [2024-11-15 10:46:21.524415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.524460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.524696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.524720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.524932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.524955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.525130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.525154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.525343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.525397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.525612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.525637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.525774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.525806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.525943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.525981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.526122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.526172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.526358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.526386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 
00:27:33.169 [2024-11-15 10:46:21.526569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.526592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.526691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.526731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.526880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.526904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.527097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.527120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.527285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.527323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.527495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.527520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.527702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.527742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.527948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.527972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.528177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.528200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.528345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.528390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 
00:27:33.169 [2024-11-15 10:46:21.528568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.528593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.528798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.528822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.528995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.529018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.529126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.529151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.529316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.529341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.529489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.529514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.529704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.529728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.169 qpair failed and we were unable to recover it. 00:27:33.169 [2024-11-15 10:46:21.529839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.169 [2024-11-15 10:46:21.529866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.529992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.530017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.530236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.530260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 
00:27:33.170 [2024-11-15 10:46:21.530466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.530492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.530608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.530633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.530787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.530812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.530987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.531037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.531171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.531210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.531409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.531434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.531601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.531626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.531800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.531824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.532037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.532060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.532224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.532247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 
00:27:33.170 [2024-11-15 10:46:21.532477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.532502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.532685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.532710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.532941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.532965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.533135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.533165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.533349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.533395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.533576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.533602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.533772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.533795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.533977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.534000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.534168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.534192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.534376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.534416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 
00:27:33.170 [2024-11-15 10:46:21.534502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.534527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.534690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.534715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.534900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.534924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.535110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.535134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.535374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.535399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.535572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.535598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.535725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.535765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.535914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.535965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.536135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.536158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.536345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.536375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 
00:27:33.170 [2024-11-15 10:46:21.536568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.536593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.536818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.536842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.537019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.537042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.537206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.537230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.537385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.537410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.170 [2024-11-15 10:46:21.537531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.170 [2024-11-15 10:46:21.537567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.170 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.537791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.537815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.537985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.538009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.538197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.538221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.538330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.538354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 
00:27:33.171 [2024-11-15 10:46:21.538498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.538524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.538676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.538701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.538888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.538911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.539128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.539153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.539281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.539305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.539546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.539571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.539789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.539813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.539963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.539987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.540172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.540196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.540422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.540447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 
00:27:33.171 [2024-11-15 10:46:21.540673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.540697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.540901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.540925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.541143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.541181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.541419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.541444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.541621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.541661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.541866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.541891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.542101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.542125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.542277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.542301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.542450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.542476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.542682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.542721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 
00:27:33.171 [2024-11-15 10:46:21.542938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.542961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.543129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.543152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.543369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.543409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.543632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.543657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.543876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.543900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.544121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.544150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.544379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.544404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.544542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.544567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.544689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.544713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.544916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.544940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 
00:27:33.171 [2024-11-15 10:46:21.545136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.545160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.545331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.545355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.545492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.545531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.545761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.171 [2024-11-15 10:46:21.545785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.171 qpair failed and we were unable to recover it. 00:27:33.171 [2024-11-15 10:46:21.546007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.546030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.546226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.546250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.546435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.546460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.546669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.546693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.546913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.546937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.547170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.547194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 
00:27:33.172 [2024-11-15 10:46:21.547387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.547413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.547599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.547624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.547854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.547877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.548068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.548097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.548308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.548332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.548534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.548560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.548770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.548794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.548986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.549010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.549171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.549194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.549408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.549433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 
00:27:33.172 [2024-11-15 10:46:21.549650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.549675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.549912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.549936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.550172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.550199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.550433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.550458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.550621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.550646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.550847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.550870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.551078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.551101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.551318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.551342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.551471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.551496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.551724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.551749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 
00:27:33.172 [2024-11-15 10:46:21.551926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.551951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.552141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.552165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.552357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.552400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.552621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.552645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.552817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.552841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.553000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.553024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.553244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.553268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.553495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.553521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.553648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.553671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.553818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.553865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 
00:27:33.172 [2024-11-15 10:46:21.554030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.554054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.554239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.172 [2024-11-15 10:46:21.554262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.172 qpair failed and we were unable to recover it. 00:27:33.172 [2024-11-15 10:46:21.554480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.554506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.554698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.554737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.554903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.554926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.555055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.555095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.555225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.555252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.555468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.555493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.555677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.555702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.555914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.555942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 
00:27:33.173 [2024-11-15 10:46:21.556132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.556156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.556370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.556410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.556644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.556683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.556848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.556871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.557068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.557093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.557272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.557295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.557532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.557557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.557786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.557810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.558115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.558139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.558330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.558374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 
00:27:33.173 [2024-11-15 10:46:21.558535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.558560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.558798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.558821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.558987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.559010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.559242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.559266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.559385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.559409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.559566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.559591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.559823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.559846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.560004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.560028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.560245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.560269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.560457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.560482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 
00:27:33.173 [2024-11-15 10:46:21.560671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.560695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.560882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.560906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.561130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.561154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.561284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.561317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.561509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.561544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.561712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.173 [2024-11-15 10:46:21.561735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.173 qpair failed and we were unable to recover it. 00:27:33.173 [2024-11-15 10:46:21.561946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.561970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.562134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.562158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.562340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.562369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.562607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.562632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 
00:27:33.174 [2024-11-15 10:46:21.562813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.562837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.563048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.563071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.563268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.563291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.563506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.563533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.563738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.563761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.563914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.563940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.564149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.564173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.564394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.564427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.564568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.564592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.564832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.564856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 
00:27:33.174 [2024-11-15 10:46:21.565054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.565082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.565265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.565289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.565492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.565517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.565739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.565762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.565879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.565902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.566029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.566053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.566267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.566290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.566472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.566497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.566669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.566693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.566871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.566895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 
00:27:33.174 [2024-11-15 10:46:21.567130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.567154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.567358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.567399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.567578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.567603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.567758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.567782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.568039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.568063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.568245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.568268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.568421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.568456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.568687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.568711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.568892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.568916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.569132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.569155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 
00:27:33.174 [2024-11-15 10:46:21.569398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.569424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.569606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.569631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.569828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.569852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.570013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.174 [2024-11-15 10:46:21.570037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.174 qpair failed and we were unable to recover it. 00:27:33.174 [2024-11-15 10:46:21.570259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.570283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.570497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.570522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.570718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.570741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.570886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.570920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.571091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.571114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.571321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.571359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 
00:27:33.175 [2024-11-15 10:46:21.571598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.571623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.571871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.571895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.572067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.572090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.572321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.572360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.572542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.572567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.572743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.572767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.572947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.572971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.573183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.573207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.573418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.573443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.573620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.573645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 
00:27:33.175 [2024-11-15 10:46:21.573875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.573899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.574077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.574100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.574276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.574311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.574526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.574566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.574780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.574804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.575024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.575047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.575274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.575297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.575529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.575555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.575732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.575755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.575981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.576004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 
00:27:33.175 [2024-11-15 10:46:21.576178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.576201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.576399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.576425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.576553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.576577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.576799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.576823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.577071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.577099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.577295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.577318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.577505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.577529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.577749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.577772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.577925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.577948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.578167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.578190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 
00:27:33.175 [2024-11-15 10:46:21.578348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.578380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.578532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-11-15 10:46:21.578557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.175 qpair failed and we were unable to recover it. 00:27:33.175 [2024-11-15 10:46:21.578740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.578763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.578997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.579020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.579203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.579226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.579458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.579484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.579687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.579712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.579950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.579975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.580195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.580235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.580393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.580419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 
00:27:33.176 [2024-11-15 10:46:21.580540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.580565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.580734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.580774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.580994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.581017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.581200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.581224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.581387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.581412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.581580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.581605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.581816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.581841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.582037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.582062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.582275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.582299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.582488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.582513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 
00:27:33.176 [2024-11-15 10:46:21.582694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.582718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.582913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.582938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.176 [2024-11-15 10:46:21.583122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-11-15 10:46:21.583148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.176 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.583371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.583397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.583586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.583612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.583826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.583851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.584075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.584100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.584264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.584289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.584434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.584460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.584585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.584610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 
00:27:33.455 [2024-11-15 10:46:21.584730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.584756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.584951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.584977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.585144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.585168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.585334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.585359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.585574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.585600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.585816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.585841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.586013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.586037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.586184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.586209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.586445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.586471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.586636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.586661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 
00:27:33.455 [2024-11-15 10:46:21.586841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.586866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.587015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.587055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.587266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.587305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.587547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.587573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.587709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.587749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.587936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.587960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.588128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.588152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.588339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.588371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.588522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.588546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.588793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.588816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 
00:27:33.455 [2024-11-15 10:46:21.589013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.589036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.589180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.589203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.589401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.589442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.589616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.589640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.589826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.589850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.590053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.590076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.590299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.590323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.590511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.590536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.590741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.590765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.455 qpair failed and we were unable to recover it. 00:27:33.455 [2024-11-15 10:46:21.590955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.455 [2024-11-15 10:46:21.590978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 
00:27:33.456 [2024-11-15 10:46:21.591207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.591231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.591350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.591379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.591616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.591645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.591861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.591885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.592112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.592135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.592358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.592390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.592572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.592598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.592774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.592798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.593026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.593050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.593283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.593307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 
00:27:33.456 [2024-11-15 10:46:21.593513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.593539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.593756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.593779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.594010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.594034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.594265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.594289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.594487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.594512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.594740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.594764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.594917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.594940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.595082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.595120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.595360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.595396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.595619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.595659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 
00:27:33.456 [2024-11-15 10:46:21.595849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.595872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.596097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.596121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.596371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.596396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.596559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.596584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.596800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.596824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.597016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.597040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.597255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.597279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.597492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.597518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.597747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.597770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.598000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.598027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 
00:27:33.456 [2024-11-15 10:46:21.598224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.598248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.598446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.598471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.598682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.598706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.598880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.598903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.599095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.599118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-11-15 10:46:21.599341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-11-15 10:46:21.599394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.599565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.599589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.599786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.599810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.599974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.599998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.600223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.600247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 
00:27:33.457 [2024-11-15 10:46:21.600395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.600420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.600598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.600623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.600819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.600843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.601031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.601054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.601214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.601238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.601425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.601451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.601689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.601712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.601934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.601957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.602185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.602209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.602436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.602461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 
00:27:33.457 [2024-11-15 10:46:21.602668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.602706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.602938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.602961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.603139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.603163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.603351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.603395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.603627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.603651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.603873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.603897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.604142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.604173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.604389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.604414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.604620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.604644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.604836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.604860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 
00:27:33.457 [2024-11-15 10:46:21.605046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.605070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.605250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.605274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.605481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.605508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.605735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.605759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.605957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.605980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.606152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.606176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.606409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.606435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.606596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-11-15 10:46:21.606621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-11-15 10:46:21.606830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.606854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.607080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.607104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 
00:27:33.458 [2024-11-15 10:46:21.607340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.607391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.607559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.607583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.607786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.607809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.607950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.607973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.608172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.608196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.608400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.608424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.608648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.608688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.608852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.608875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.609111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.609135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.609320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.609344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 
00:27:33.458 [2024-11-15 10:46:21.609569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.609594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.609787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.609811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.610020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.610044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.610211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.610234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.610398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.610423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.610622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.610662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.610806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.610829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.611061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.611084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.611287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.611312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.611517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.611542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 
00:27:33.458 [2024-11-15 10:46:21.611744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.611768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.611979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.612003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.612206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.612229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.612433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.612459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.612718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.612742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.612925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.612948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.613104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.613128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.613326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.613354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.613536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.613560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.613706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.613731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 
00:27:33.458 [2024-11-15 10:46:21.613957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.613981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.614199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.614222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.614415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.614440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-11-15 10:46:21.614647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-11-15 10:46:21.614672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.614913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.614936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.615144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.615168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.615419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.615445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.615638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.615663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.615844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.615868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.616065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.616090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 
00:27:33.459 [2024-11-15 10:46:21.616315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.616339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.616509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.616535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.616716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.616741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.616922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.616946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.617171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.617195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.617427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.617452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.617606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.617631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.617859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.617882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.618066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.618092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.618290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.618317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 
00:27:33.459 [2024-11-15 10:46:21.618475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.618502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.618617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.618643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.618783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.618822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.618948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.618972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.619113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.619143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.619247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.619274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.619474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.619522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.619692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.619737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.619939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.619969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.620219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.620261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 
00:27:33.459 [2024-11-15 10:46:21.620444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.620478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.620668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.620716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.620969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.620996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.621227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.621251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.621435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.621461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.621603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.621653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-11-15 10:46:21.621844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-11-15 10:46:21.621868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.622064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.622088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.622273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.622297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.622493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.622519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 
00:27:33.460 [2024-11-15 10:46:21.622687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.622711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.622939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.622962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.623194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.623217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.623423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.623449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.623569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.623594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.623815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.623838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.624013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.624036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.624253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.624276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.624476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.624501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.624671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.624696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 
00:27:33.460 [2024-11-15 10:46:21.624906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.624929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.625167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.625195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.625436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.625462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.625661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.625684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.625903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.625926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.626156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.626179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.626410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.626445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.626572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.626597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.626733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.626771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.626989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.627013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 
00:27:33.460 [2024-11-15 10:46:21.627248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.627272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.627458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.627485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.627629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.627653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.627829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.627852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.628077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.628100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.628303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.628326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.628518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.628543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.628757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.628780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.628938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.628961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.629175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.629198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 
00:27:33.460 [2024-11-15 10:46:21.629430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.629455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.460 qpair failed and we were unable to recover it. 00:27:33.460 [2024-11-15 10:46:21.629600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.460 [2024-11-15 10:46:21.629625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.629819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.629842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.630033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.630057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.630234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.630257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.630473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.630499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.630632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.630670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.630834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.630858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.630979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.631003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.631170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.631209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 
00:27:33.461 [2024-11-15 10:46:21.631389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.631439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.631581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.631606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.631775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.631814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.632007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.632031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.632196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.632220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.632446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.632472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.632612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.632638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.632813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.632836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.633056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.633079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.633297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.633321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 
00:27:33.461 [2024-11-15 10:46:21.633477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.633503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.633631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.633656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.633797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.633830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.634466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.634492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.634636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.634676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.634890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.634913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.635044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.635068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.635270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.635293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.635484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.635517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.635726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.635765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 
00:27:33.461 [2024-11-15 10:46:21.635999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.636022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.636267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.636290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.636483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.636509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.636629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.636669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.636811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.636835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.637079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.637103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.637332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.637356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.637573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.637598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.637732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.637772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 00:27:33.461 [2024-11-15 10:46:21.637991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.461 [2024-11-15 10:46:21.638015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.461 qpair failed and we were unable to recover it. 
00:27:33.461 [2024-11-15 10:46:21.638204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.638227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.638400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.638426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.638548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.638588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.638716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.638754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.638976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.638999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.639217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.639240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.639468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.639508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.639668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.639691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.639872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.639895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.640122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.640149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 
00:27:33.462 [2024-11-15 10:46:21.640386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.640411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.640609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.640633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.640857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.640880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.641120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.641143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.641288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.641311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.641550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.641574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.641736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.641759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.641995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.642018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.642192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.642215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.642415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.642441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 
00:27:33.462 [2024-11-15 10:46:21.642653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.642691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.642899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.642922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.643152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.643176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.643394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.643420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.643672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.643697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.643936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.643959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.644188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.644212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.644463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.644489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.644701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.644725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.644837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.644862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 
00:27:33.462 [2024-11-15 10:46:21.645083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.645121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.645344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.645390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.645590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.645615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.645749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.645774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.645998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.646022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-11-15 10:46:21.646248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-11-15 10:46:21.646272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.646505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.646534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.646677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.646716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.646909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.646933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.647082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.647105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 
00:27:33.463 [2024-11-15 10:46:21.647296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.647319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.647551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.647576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.647789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.647812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.648002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.648032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.648251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.648274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.648412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.648438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.648586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.648611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.648841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.648864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.649104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.649127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.649317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.649341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 
00:27:33.463 [2024-11-15 10:46:21.649559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.649584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.649807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.649830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.650060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.650083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.650303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.650326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.650502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.650528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.650694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.650720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.650875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.650899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.651089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.651128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.651355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.651385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.651561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.651585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 
00:27:33.463 [2024-11-15 10:46:21.651816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.651839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.652068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.652091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.652273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.652296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.652500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.652526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.652679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.652702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.652927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.652951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.653175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.653198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.653399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.653424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.653600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.653624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.653765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.653803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 
00:27:33.463 [2024-11-15 10:46:21.653989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.654013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.654208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.654231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.654458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.654483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.654725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-11-15 10:46:21.654748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-11-15 10:46:21.654982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.655005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.655174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.655197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.655430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.655455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.655654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.655679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.655861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.655885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.656088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.656111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 
00:27:33.464 [2024-11-15 10:46:21.656350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.656397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.656603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.656628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.656774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.656798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.656984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.657007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.657216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.657239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.657370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.657395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.657562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.657586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.657794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.657817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.658017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.658040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.658227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.658250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 
00:27:33.464 [2024-11-15 10:46:21.658440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.658465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.658692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.658731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.658936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.658959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.659140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.659163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.659400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.659424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.659565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.659589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.659828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.659851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.660083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.660107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.660294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.660318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.660466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.660491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 
00:27:33.464 [2024-11-15 10:46:21.660719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.660743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.660892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.660916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.661069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.661108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.661308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.661332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.661570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.661599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.661761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.661784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.662009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.662032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.662265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-11-15 10:46:21.662289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-11-15 10:46:21.662460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.465 [2024-11-15 10:46:21.662485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.465 qpair failed and we were unable to recover it. 00:27:33.465 [2024-11-15 10:46:21.662726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.465 [2024-11-15 10:46:21.662749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.465 qpair failed and we were unable to recover it. 
00:27:33.465 [2024-11-15 10:46:21.662997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.465 [2024-11-15 10:46:21.663021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:33.465 qpair failed and we were unable to recover it.
00:27:33.465 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 10:46:21.663 through 10:46:21.711 ...]
00:27:33.470 [2024-11-15 10:46:21.710994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.470 [2024-11-15 10:46:21.711017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:33.470 qpair failed and we were unable to recover it.
00:27:33.470 [2024-11-15 10:46:21.711228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-11-15 10:46:21.711251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-11-15 10:46:21.711492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-11-15 10:46:21.711521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-11-15 10:46:21.711701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-11-15 10:46:21.711725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-11-15 10:46:21.711964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-11-15 10:46:21.711987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-11-15 10:46:21.712222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-11-15 10:46:21.712245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-11-15 10:46:21.712446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-11-15 10:46:21.712471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-11-15 10:46:21.712602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-11-15 10:46:21.712626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-11-15 10:46:21.712818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-11-15 10:46:21.712841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-11-15 10:46:21.712984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-11-15 10:46:21.713008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-11-15 10:46:21.713141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-11-15 10:46:21.713165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 
00:27:33.471 [2024-11-15 10:46:21.713344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.713389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.713576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.713600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.713732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.713771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.713978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.714017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.714237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.714261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.714444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.714470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.714689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.714728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.714908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.714931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.715157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.715180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.715427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.715453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 
00:27:33.471 [2024-11-15 10:46:21.715632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.715672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.715889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.715912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.716104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.716127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.716366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.716406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.716576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.716601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.716731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.716768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.716972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.716995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.717230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.717253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.717486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.717515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.717724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.717748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 
00:27:33.471 [2024-11-15 10:46:21.717909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.717932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.718155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.718178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.718413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.718437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.718662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.718700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.718893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.718916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.719138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.719162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.719402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.719426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.719654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.719692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.719907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.719930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.720096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.720119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 
00:27:33.471 [2024-11-15 10:46:21.720307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.720330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.720558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.720583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.720780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.720803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.721039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.721063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.721243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.721266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.721490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.721515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.721745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.721768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.721978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-11-15 10:46:21.722001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-11-15 10:46:21.722237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.722260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.722485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.722510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 
00:27:33.472 [2024-11-15 10:46:21.722739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.722762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.722928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.722952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.723143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.723166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.723392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.723432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.723664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.723688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.723875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.723902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.724117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.724141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.724373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.724413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.724620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.724658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.724856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.724879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 
00:27:33.472 [2024-11-15 10:46:21.725072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.725110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.725272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.725295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.725458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.725483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.725660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.725684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.725895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.725918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.726156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.726180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.726372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.726396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.726594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.726618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.726760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.726797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.727036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.727059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 
00:27:33.472 [2024-11-15 10:46:21.727284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.727307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.727531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.727555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.727793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.727817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.727994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.728017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.728174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.728197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.728434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.728458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.728631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.728670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.728923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.728946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.729173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.729196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.729437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.729463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 
00:27:33.472 [2024-11-15 10:46:21.729681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.729706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.729934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.729957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.730182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.730205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.730435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.730460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.730686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.730710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.730912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.730935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.731131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.731170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.731399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.731438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.731622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.731647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.472 qpair failed and we were unable to recover it. 00:27:33.472 [2024-11-15 10:46:21.731826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.472 [2024-11-15 10:46:21.731849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 
00:27:33.473 [2024-11-15 10:46:21.732074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.732098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.732320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.732359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.732595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.732619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.732847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.732870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.733110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.733133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.733316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.733354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.733611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.733636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.733827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.733850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.734040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.734064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.734250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.734273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 
00:27:33.473 [2024-11-15 10:46:21.734445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.734471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.734690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.734713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.734904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.734927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.735122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.735145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.735380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.735405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.735584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.735609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.735844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.735867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.736110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.736133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.736386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.736426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.736619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.736643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 
00:27:33.473 [2024-11-15 10:46:21.736851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.736875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.737074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.737113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.737324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.737347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.737546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.737570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.737785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.737808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.738035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.738058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.738278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.738301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.738479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.738505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.738709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.738732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.738955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.738978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 
00:27:33.473 [2024-11-15 10:46:21.739214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.739252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.739415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.739455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.739587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.739612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.739780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.739807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.740013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.740037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.740231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.740254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.740428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.473 [2024-11-15 10:46:21.740453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.473 qpair failed and we were unable to recover it. 00:27:33.473 [2024-11-15 10:46:21.740672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.474 [2024-11-15 10:46:21.740695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.474 qpair failed and we were unable to recover it. 00:27:33.474 [2024-11-15 10:46:21.740934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.474 [2024-11-15 10:46:21.740958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.474 qpair failed and we were unable to recover it. 00:27:33.474 [2024-11-15 10:46:21.741134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.474 [2024-11-15 10:46:21.741157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.474 qpair failed and we were unable to recover it. 
00:27:33.474 [2024-11-15 10:46:21.741350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.474 [2024-11-15 10:46:21.741394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:33.474 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back, with log timestamps advancing from 10:46:21.741 to 10:46:21.788 ...]
00:27:33.479 [2024-11-15 10:46:21.788017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.479 [2024-11-15 10:46:21.788040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:33.479 qpair failed and we were unable to recover it.
00:27:33.479 [2024-11-15 10:46:21.788233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.788263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.788453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.788478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.788649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.788688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.788908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.788931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.789154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.789178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.789428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.789453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.789654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.789693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.789920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.789944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.790078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.790102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.790263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.790302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 
00:27:33.479 [2024-11-15 10:46:21.790527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.790553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.790699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.790738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.790909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.790933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.791159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.791183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.791391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.791432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.791637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.791675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.791908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.791932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.792085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.792109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.792329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.792374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-15 10:46:21.792596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.479 [2024-11-15 10:46:21.792622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.479 qpair failed and we were unable to recover it. 
00:27:33.479 [2024-11-15 10:46:21.792816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.792840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.793059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.793098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.793305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.793329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.793561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.793587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.793746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.793770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.793940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.793964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.794191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.794214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.794391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.794420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.794612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.794638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.794834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.794858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 
00:27:33.480 [2024-11-15 10:46:21.795104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.795127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.795312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.795335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.795544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.795569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.795763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.795786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.795972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.795995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.796166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.796190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.796386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.796426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.796628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.796652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.796856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.796880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.797100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.797124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 
00:27:33.480 [2024-11-15 10:46:21.797355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.797404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.797651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.797690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.797869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.797893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.798064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.798089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.798276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.798299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.798504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.798529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.798682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.798706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.798890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.798914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.799136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.799159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.799368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.799393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 
00:27:33.480 [2024-11-15 10:46:21.799615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.799639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.799816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.799839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.799974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.799998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.800150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.800188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.800393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.800419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.800616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.800656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.800826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.800849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.801071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.801094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.801293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.801316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.801536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.801560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 
00:27:33.480 [2024-11-15 10:46:21.801789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.480 [2024-11-15 10:46:21.801812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.480 qpair failed and we were unable to recover it. 00:27:33.480 [2024-11-15 10:46:21.802048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.802071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.802238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.802262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.802495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.802521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.802773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.802797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.802995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.803018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.803222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.803245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.803407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.803432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.803628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.803668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.803852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.803876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 
00:27:33.481 [2024-11-15 10:46:21.804098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.804122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.804282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.804305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.804478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.804503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.804734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.804758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.804987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.805010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.805196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.805220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.805387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.805426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.805586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.805610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.805817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.805840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.806014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.806037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 
00:27:33.481 [2024-11-15 10:46:21.806238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.806261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.806471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.806496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.806697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.806721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.806897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.806920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.807150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.807173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.807368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.807407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.807641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.807665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.807835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.807859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.808034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.808058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.808291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.808315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 
00:27:33.481 [2024-11-15 10:46:21.808552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.808577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.808792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.808815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.809066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.809089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.809316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.809339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.809558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.809583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.809773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.809800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.810025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.810049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.810279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.810303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.810451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.810476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.810665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.810688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 
00:27:33.481 [2024-11-15 10:46:21.810913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.810937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.811160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.811184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.811334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.811380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.811542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.811567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.811775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.811798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.481 qpair failed and we were unable to recover it. 00:27:33.481 [2024-11-15 10:46:21.812001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.481 [2024-11-15 10:46:21.812024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.812184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.812208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.812435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.812459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.812648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.812687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.812860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.812884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 
00:27:33.482 [2024-11-15 10:46:21.813111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.813135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.813291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.813314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.813554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.813578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.813825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.813849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.814012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.814035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.814221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.814244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.814480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.814505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.814693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.814733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.814943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.814966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.815163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.815187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 
00:27:33.482 [2024-11-15 10:46:21.815414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.815439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.815590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.815614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.815806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.815834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.816039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.816063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.816292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.816316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.816486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.816512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.816713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.816752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.816934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.816957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.817154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.817177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.817403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.817429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 
00:27:33.482 [2024-11-15 10:46:21.817587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.817613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.817844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.817867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.818058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.818082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.818299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.818324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.818561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.818586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.818766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.818790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.819009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.819033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.819263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.819287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.819525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.819550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.819708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.819732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 
00:27:33.482 [2024-11-15 10:46:21.819953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.819977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.820207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.482 [2024-11-15 10:46:21.820231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.482 qpair failed and we were unable to recover it. 00:27:33.482 [2024-11-15 10:46:21.820432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.820457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.820631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.820655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.820881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.820905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.821083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.821107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.821274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.821298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.821478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.821517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.821752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.821775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.821997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.822024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 
00:27:33.483 [2024-11-15 10:46:21.822216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.822240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.822436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.822460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.822593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.822617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.822822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.822846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.823080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.823103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.823257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.823280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.823442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.823467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.823648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.823672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.823886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.823909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.824130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.824154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 
00:27:33.483 [2024-11-15 10:46:21.824366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.824391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.824536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.824560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.824761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.824785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.824992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.825016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.825202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.825225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.825408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.825434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.825612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.825637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.825835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.825859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.826074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.826097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.826281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.826305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 
00:27:33.483 [2024-11-15 10:46:21.826482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.826508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.826652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.826677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.826873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.826897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.827090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.827114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.827293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.827316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.827538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.827564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.827783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.827806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.827994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.828018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.828178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.828202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.828389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.828413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 
00:27:33.483 [2024-11-15 10:46:21.828636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.828676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.828910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.828934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.829118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.829142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.829360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.829402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.829552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.829577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.829800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.829823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.830019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.830043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.830224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.830247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.830436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.483 [2024-11-15 10:46:21.830461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.483 qpair failed and we were unable to recover it. 00:27:33.483 [2024-11-15 10:46:21.830644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.830683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 
00:27:33.484 [2024-11-15 10:46:21.830866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.830890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.831019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.831057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.831258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.831282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.831485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.831510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.831658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.831697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.831828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.831867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.832032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.832070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.832248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.832271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.832469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.832494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.832661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.832684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 
00:27:33.484 [2024-11-15 10:46:21.832865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.832889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.833065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.833088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.833273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.833296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.833471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.833496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.833657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.833681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.833908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.833931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.834139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.834163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.834374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.834399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.834577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.834602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.834800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.834823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 
00:27:33.484 [2024-11-15 10:46:21.835046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.835069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.835300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.835323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.835484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.835511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.835733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.835756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.835957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.835981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.836156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.836181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.836400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.836426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.836597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.836627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.836822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.836845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.836972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.836996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 
00:27:33.484 [2024-11-15 10:46:21.837244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.837268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.837499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.837524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.837708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.837732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.837952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.837976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.838205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.838228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.838445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.838471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.838712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.838736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.838934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.838957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.839180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.839204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.839447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.839473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 
00:27:33.484 [2024-11-15 10:46:21.839658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.839697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.839833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.839857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.484 [2024-11-15 10:46:21.840056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.484 [2024-11-15 10:46:21.840079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.484 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.840263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.840286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.840471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.840497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.840669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.840693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.840867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.840891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.841073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.841097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.841324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.841348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.841495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.841519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 
00:27:33.485 [2024-11-15 10:46:21.841721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.841745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.841900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.841923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.842118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.842141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.842310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.842334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.842487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.842516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.842705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.842743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.842928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.842951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.843104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.843127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.843359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.843403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.843582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.843606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 
00:27:33.485 [2024-11-15 10:46:21.843738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.843778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.843968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.843992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.844215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.844239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.844384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.844425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.844606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.844631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.844811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.844835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.845039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.845062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.845261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.845284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.845482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.845507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.845696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.845719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 
00:27:33.485 [2024-11-15 10:46:21.845943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.845967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.846097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.846121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.846324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.846347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.846538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.846563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.846773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.846797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.846994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.847018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.847227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.847251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.847473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.847498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.847656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.847680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.847878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.847901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 
00:27:33.485 [2024-11-15 10:46:21.848124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.848147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.848374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.848399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.848587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.848612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.848771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.848795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.849001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.849024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.849187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.849210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.849407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.485 [2024-11-15 10:46:21.849433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.485 qpair failed and we were unable to recover it. 00:27:33.485 [2024-11-15 10:46:21.849616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.849641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.849826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.849849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.850080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.850103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 
00:27:33.486 [2024-11-15 10:46:21.850305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.850329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.850523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.850548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.850729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.850753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.850894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.850932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.851130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.851154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.851351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.851396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.851653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.851678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.851826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.851850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.852019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.852058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.852282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.852305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 
00:27:33.486 [2024-11-15 10:46:21.852494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.852520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.852723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.852747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.852912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.852936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.853116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.853140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.853303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.853327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.853571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.853595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.853767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.853791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.853997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.854021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.854242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.854266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.854478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.854503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 
00:27:33.486 [2024-11-15 10:46:21.854677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.854700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.854901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.854925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.855152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.855176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.855396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.855421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.855596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.855621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.855850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.855873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.856097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.856120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.856311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.856335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.856494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.856519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.856705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.856744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 
00:27:33.486 [2024-11-15 10:46:21.856972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.856996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.857158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.857182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.857416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.857445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.857634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.857658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.857831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.857855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.858039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.858063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.858284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.858308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.858540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.858565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.858763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.858787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 00:27:33.486 [2024-11-15 10:46:21.858977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.486 [2024-11-15 10:46:21.859001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.486 qpair failed and we were unable to recover it. 
00:27:33.486 [2024-11-15 10:46:21.859228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.859251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.859449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.859474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.859648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.859673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.859898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.859921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.860155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.860178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.860379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.860404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.860591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.860616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.860787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.860810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.860996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.861021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.861254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.861278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 
00:27:33.487 [2024-11-15 10:46:21.861490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.861516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.861655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.861694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.861882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.861906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.862130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.862153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.862392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.862417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.862566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.862589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.862783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.862806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.862996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.863019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.863224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.863247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.863471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.863499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 
00:27:33.487 [2024-11-15 10:46:21.863727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.863751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.863916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.863940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.864165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.864188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.864422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.864448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.864585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.864610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.864833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.864856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.865041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.865065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.865293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.865318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.865550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.865575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.865810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.865833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 
00:27:33.487 [2024-11-15 10:46:21.866065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.866089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.866316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.866340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.866517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.866542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.866737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.866776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.867008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.867031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.867224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.867248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.867450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.867476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.867675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.867699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.867937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.867961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.868175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.868199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 
00:27:33.487 [2024-11-15 10:46:21.868395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.868420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.868642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.868666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.868823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.868846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.869014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.869039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.869235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.869259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.869442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.869468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.869590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.869619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.869845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.869869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.870061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.870084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.870272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.870295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 
00:27:33.487 [2024-11-15 10:46:21.870495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.870520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.487 [2024-11-15 10:46:21.870721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.487 [2024-11-15 10:46:21.870745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.487 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.870970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.870994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.871175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.871198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.871355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.871386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.871577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.871601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.871745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.871768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.871983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.872007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.872234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.872258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.872435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.872461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 
00:27:33.488 [2024-11-15 10:46:21.872631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.872671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.872880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.872903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.873137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.873161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.873397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.873423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.873540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.873566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.873779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.873803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.874019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.874043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.874235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.874259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.874399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.874424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.874626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.874650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 
00:27:33.488 [2024-11-15 10:46:21.874852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.874875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.875100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.875124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.875303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.875326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.875516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.875543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.875769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.875793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.875974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.875997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.876155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.876180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.876431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.876455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.876646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.876670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.876872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.876896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 
00:27:33.488 [2024-11-15 10:46:21.877086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.877110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.877338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.877391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.877539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.877564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.877759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.877784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.877962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.877986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.878188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.878211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.878431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.878457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.878702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.878741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.878919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.878945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.879115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.879141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 
00:27:33.488 [2024-11-15 10:46:21.879309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.879333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.879539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.879564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.879698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.879722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.879919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.879944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.880120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.880144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.880386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.880410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.880597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.880622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.880822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.880845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.881064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.881088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.881312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.881334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 
00:27:33.488 [2024-11-15 10:46:21.881496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.881522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.881685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.881710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.488 [2024-11-15 10:46:21.881933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.488 [2024-11-15 10:46:21.881956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.488 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.882139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.882162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.882293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.882317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.882531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.882557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.882739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.882763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.882990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.883012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.883161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.883184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.883388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.883414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 
00:27:33.489 [2024-11-15 10:46:21.883634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.883659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.883846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.883871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.884089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.884113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.884297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.884320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.884523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.884549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.884739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.884763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.884980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.885004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.885199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.885238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.885385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.885425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.885634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.885658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 
00:27:33.489 [2024-11-15 10:46:21.885867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.885891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.886125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.886148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.886303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.886326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.886473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.886514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.886710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.886733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.886950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.886974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.887199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.887224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.887397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.887427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.887610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.887649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.887866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.887890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 
00:27:33.489 [2024-11-15 10:46:21.888117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.888141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.888335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.888382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.888593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.888619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.888779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.888802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.889038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.889062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.889256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.889280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.889514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.889541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.889693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.889718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.889938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.889962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.890192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.890214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 
00:27:33.489 [2024-11-15 10:46:21.890408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.890432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.890597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.890620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.890858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.890881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.891066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.891088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.891308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.891330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.891458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.891482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.891658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.891697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.891912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.891934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.892151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.892174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 00:27:33.489 [2024-11-15 10:46:21.892412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.489 [2024-11-15 10:46:21.892437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.489 qpair failed and we were unable to recover it. 
00:27:33.489 [2024-11-15 10:46:21.892655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.489 [2024-11-15 10:46:21.892693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420
00:27:33.489 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 10:46:21.892 through 10:46:21.940 ...]
00:27:33.779 [2024-11-15 10:46:21.940573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.779 [2024-11-15 10:46:21.940598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420
00:27:33.779 qpair failed and we were unable to recover it.
00:27:33.779 [2024-11-15 10:46:21.940778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.940802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.940975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.940999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.941172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.941196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.941437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.941463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.941661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.941686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.941859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.941883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.942036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.942059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.942252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.942280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.942448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.942473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.942661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.942685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 
00:27:33.779 [2024-11-15 10:46:21.942913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.942937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.943132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.943156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.943395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.943436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.943589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.943615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.943822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.943846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.944031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-11-15 10:46:21.944055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-11-15 10:46:21.944255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.944278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.944476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.944502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.944690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.944715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.944938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.944962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 
00:27:33.780 [2024-11-15 10:46:21.945157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.945180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.945395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.945420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.945568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.945592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.945807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.945830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.945992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.946016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.946210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.946233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.946459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.946484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.946695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.946718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.946850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.946873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.947074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.947097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 
00:27:33.780 [2024-11-15 10:46:21.947322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.947371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.947518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.947542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.947740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.947763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.947932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.947955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.948170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.948194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.948427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.948452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.948672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.948696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.948885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.948909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.949134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.949158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.949312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.949335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 
00:27:33.780 [2024-11-15 10:46:21.949494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.949519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.949692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.949716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.949886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.949909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.950068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.950092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.950279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.950303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.950485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.950510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.950658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.950683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.950904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.950946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.951119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.951142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.951380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.951405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 
00:27:33.780 [2024-11-15 10:46:21.951595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.951620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.951773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.951797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.952025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.952049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.780 qpair failed and we were unable to recover it. 00:27:33.780 [2024-11-15 10:46:21.952250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.780 [2024-11-15 10:46:21.952274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.952465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.952489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.952656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.952680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.952841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.952865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.953045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.953069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.953300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.953324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.953507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.953532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 
00:27:33.781 [2024-11-15 10:46:21.953698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.953722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.953928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.953953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.954171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.954194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.954346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.954375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.954579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.954603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.954836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.954860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.955106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.955130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.955309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.955333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.955509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.955533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.955723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.955761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 
00:27:33.781 [2024-11-15 10:46:21.955960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.955983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.956168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.956191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.956316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.956355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.956566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.956591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.956837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.956861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.957027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.957050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.957242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.957266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.957493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.957518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.957719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.957743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.957887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.957911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 
00:27:33.781 [2024-11-15 10:46:21.958135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.958158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.958343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.958389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.958553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.958578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.958791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.958829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.958984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.959009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.959226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.959265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.959504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.959531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.959683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.959715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.959888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.959913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.960026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.960051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 
00:27:33.781 [2024-11-15 10:46:21.960226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.960277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.781 [2024-11-15 10:46:21.960525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.781 [2024-11-15 10:46:21.960553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.781 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.960729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.960754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.960983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.961022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.961193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.961217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.961442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.961468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.961631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.961669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.961823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.961847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.962078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.962102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.962304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.962328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 
00:27:33.782 [2024-11-15 10:46:21.962533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.962558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.962696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.962719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.962893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.962932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.963091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.963114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.963225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.963249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.963495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.963520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.963640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.963666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.963816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.963840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.963980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.964019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.964134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.964159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 
00:27:33.782 [2024-11-15 10:46:21.964280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.964304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.964460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.964487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.964599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.964623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.964725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.964750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.964930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.964969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.965132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.965157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.965262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.965286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.965415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.965441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.965550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.965575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.965722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.965746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 
00:27:33.782 [2024-11-15 10:46:21.965889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.965927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.966097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.966121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.966236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.966260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.966394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.966434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.966534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.966560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.966701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.966742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.966904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.966928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.967040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.967083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.967231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-11-15 10:46:21.967255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.782 [2024-11-15 10:46:21.967404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.783 [2024-11-15 10:46:21.967430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.783 qpair failed and we were unable to recover it. 
00:27:33.783 [2024-11-15 10:46:21.967581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.783 [2024-11-15 10:46:21.967607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420
00:27:33.783 qpair failed and we were unable to recover it.
[the same three-line failure sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats back-to-back for every reconnect attempt from 2024-11-15 10:46:21.967 through 10:46:22.001 (runtime stamps 00:27:33.783-00:27:33.789), with only the microsecond timestamps changing]
00:27:33.789 [2024-11-15 10:46:22.001783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.001822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.001940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.001965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.002166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.002191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.002329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.002353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.002482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.002507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.002662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.002687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.002832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.002869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.003038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.003062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.003184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.003209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.003344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.003391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 
00:27:33.789 [2024-11-15 10:46:22.003529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.003555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.003742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.003766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.003911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.003934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.004166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.004190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.004390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.004428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.004588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.004613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.004800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.004823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.004935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.004973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.005117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.005141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.005302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.005340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 
00:27:33.789 [2024-11-15 10:46:22.005457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.005481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.005605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.005630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.005759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.005784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.005878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.005902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.006035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.006059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.006173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.006197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.006342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.006387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.006494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.006518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.006718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.006741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.006924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.006948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 
00:27:33.789 [2024-11-15 10:46:22.007098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.007122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.007264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.007303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-11-15 10:46:22.007439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.789 [2024-11-15 10:46:22.007465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.007559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.007584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.007746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.007771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.007971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.007994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.008199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.008223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.008369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.008409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.008507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.008532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.008632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.008657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 
00:27:33.790 [2024-11-15 10:46:22.008776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.008801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.008953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.008992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.009098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.009136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.009264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.009288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.009406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.009432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.009525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.009550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.009714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.009754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.009875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.009913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.010049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.010074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.010234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.010273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 
00:27:33.790 [2024-11-15 10:46:22.010425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.010451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.010553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.010578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.010710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.010749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.010879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.010918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.011050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.011092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.011267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.011305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.011427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.011453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.011545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.011570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.011697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.011721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.011893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.011931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 
00:27:33.790 [2024-11-15 10:46:22.012062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.012086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.012252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.012290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.012433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.012459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.012591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.012615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-11-15 10:46:22.012793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.790 [2024-11-15 10:46:22.012816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.012997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.013021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.013137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.013176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.013307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.013331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.013473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.013499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.013627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.013669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 
00:27:33.791 [2024-11-15 10:46:22.013792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.013815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.014007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.014031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.014169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.014193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.014333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.014357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.014461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.014486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.014619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.014644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.014756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.014780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.014916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.014940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.015079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.015103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.015214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.015238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 
00:27:33.791 [2024-11-15 10:46:22.015385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.015410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.015517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.015542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.015633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.015672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.015835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.015859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.016000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.016024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.016157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.016182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.016330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.016354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.016500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.016540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.016633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.016657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.016809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.016833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 
00:27:33.791 [2024-11-15 10:46:22.017036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.017059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.017187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.017211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.017383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.017409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.017535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.017575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.017723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.017766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.017895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.017920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.018089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.018113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.018223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.791 [2024-11-15 10:46:22.018247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-11-15 10:46:22.018425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.018451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.018573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.018611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 
00:27:33.792 [2024-11-15 10:46:22.018786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.018809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.018946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.018985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.019126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.019165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.019297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.019321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.019496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.019522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.019671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.019695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.019838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.019877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.020007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.020031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.020159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.020183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.020293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.020318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 
00:27:33.792 [2024-11-15 10:46:22.020433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.020458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.020594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.020619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.020778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.020801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.020979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.021002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.021174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.021197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.021328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.021372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.021509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.021533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.021687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.021726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.021863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.021902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.022041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.022064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 
00:27:33.792 [2024-11-15 10:46:22.022190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.022214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.022334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.022381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.022497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.022523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.022657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.022681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.022815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.022853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.023017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.023040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.023182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.023206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.023313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.023338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.023473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.023498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.792 [2024-11-15 10:46:22.023593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.023618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 
00:27:33.792 [2024-11-15 10:46:22.023792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.792 [2024-11-15 10:46:22.023830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.792 qpair failed and we were unable to recover it. 00:27:33.793 [2024-11-15 10:46:22.023939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.793 [2024-11-15 10:46:22.023963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.793 qpair failed and we were unable to recover it. 00:27:33.793 [2024-11-15 10:46:22.024099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.793 [2024-11-15 10:46:22.024123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.793 qpair failed and we were unable to recover it. 00:27:33.793 [2024-11-15 10:46:22.024310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.793 [2024-11-15 10:46:22.024334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.793 qpair failed and we were unable to recover it. 00:27:33.793 [2024-11-15 10:46:22.024494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.793 [2024-11-15 10:46:22.024523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.793 qpair failed and we were unable to recover it. 00:27:33.793 [2024-11-15 10:46:22.024648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.793 [2024-11-15 10:46:22.024688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.793 qpair failed and we were unable to recover it. 00:27:33.793 [2024-11-15 10:46:22.024863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.793 [2024-11-15 10:46:22.024886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.793 qpair failed and we were unable to recover it. 00:27:33.793 [2024-11-15 10:46:22.025059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.793 [2024-11-15 10:46:22.025083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.793 qpair failed and we were unable to recover it. 00:27:33.793 [2024-11-15 10:46:22.025234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.793 [2024-11-15 10:46:22.025258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.793 qpair failed and we were unable to recover it. 00:27:33.793 [2024-11-15 10:46:22.025394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.793 [2024-11-15 10:46:22.025419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.793 qpair failed and we were unable to recover it. 
00:27:33.799 [2024-11-15 10:46:22.060414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.060444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.060579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.060609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.060756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.060785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.060933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.060975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.061194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.061222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.061375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.061417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.061533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.061558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.061676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.061700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.061858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.061897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.062051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.062076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 
00:27:33.799 [2024-11-15 10:46:22.062214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.062238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.062394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.062420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.062542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.062566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.062674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.062698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.062840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.062865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.062987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.063011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.063132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.063156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.063290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.063314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.063433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.063458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.063570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.063595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 
00:27:33.799 [2024-11-15 10:46:22.063755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.063793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.063920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.063943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.064093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.064118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.064223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.064247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.064356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.064386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.064538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.064563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.064716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.064740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.064879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.064903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.065097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.065120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.065226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.065251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 
00:27:33.799 [2024-11-15 10:46:22.065429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.065455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.065588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.065627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.065734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.065759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.065897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.065922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.066040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.799 [2024-11-15 10:46:22.066069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.799 qpair failed and we were unable to recover it. 00:27:33.799 [2024-11-15 10:46:22.066242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.066266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.066415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.066439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.066552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.066578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.066718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.066742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.066866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.066891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 
00:27:33.800 [2024-11-15 10:46:22.067002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.067027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.067167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.067191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.067332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.067356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.067488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.067513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.067632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.067671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.067817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.067842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.067960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.067985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.068136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.068160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.068284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.068308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.068412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.068438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 
00:27:33.800 [2024-11-15 10:46:22.068533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.068558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.068689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.068713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.068874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.068898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.069053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.069076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.069210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.069248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.069397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.069445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.069535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.069559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.069705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.069729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.069862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.069886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.069995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.070019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 
00:27:33.800 [2024-11-15 10:46:22.070188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.070212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.070343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.070372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.070487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.070512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.070596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.070621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.070760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.070784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.070935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.070960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.071078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.071102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.071242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.071266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.071415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.071440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.071541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.071567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 
00:27:33.800 [2024-11-15 10:46:22.071701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.071725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.071874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.800 [2024-11-15 10:46:22.071913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.800 qpair failed and we were unable to recover it. 00:27:33.800 [2024-11-15 10:46:22.072020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.072044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.072186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.072210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.072349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.072391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.072493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.072517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.072635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.072660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.072797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.072821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.072953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.072979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.073114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.073139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 
00:27:33.801 [2024-11-15 10:46:22.073274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.073299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.073417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.073444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.073537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.073563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.073677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.073703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.073847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.073872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.074018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.074043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.074192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.074218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.074305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.074331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.074424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.074450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.074535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.074561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 
00:27:33.801 [2024-11-15 10:46:22.074693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.074718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.074803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.074828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.074950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.074975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.075121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.075146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.075267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.075292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.075374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.075400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.075518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.075544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.075642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.075681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.075797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.075821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.075965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.075990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 
00:27:33.801 [2024-11-15 10:46:22.076093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.076118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.076265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.076293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.076408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.076434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.076531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.076557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.076707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.076731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.076872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.076911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.077046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.077070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.077204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.077228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.077376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.077402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 00:27:33.801 [2024-11-15 10:46:22.077490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.801 [2024-11-15 10:46:22.077516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.801 qpair failed and we were unable to recover it. 
00:27:33.801 [2024-11-15 10:46:22.077609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.077634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.077766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.077790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.077948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.077971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.078100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.078140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.078242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.078267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.078357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.078390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.078487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.078512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.078638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.078678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.078813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.078851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.078977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.079001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 
00:27:33.802 [2024-11-15 10:46:22.079135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.079159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.079325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.079370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.079464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.079490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.079589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.079614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.079744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.079769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.079905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.079930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.080090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.080114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.080244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.080268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.080381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.080408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 00:27:33.802 [2024-11-15 10:46:22.080511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.802 [2024-11-15 10:46:22.080537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.802 qpair failed and we were unable to recover it. 
00:27:33.802 [2024-11-15 10:46:22.080696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.802 [2024-11-15 10:46:22.080721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420
00:27:33.802 qpair failed and we were unable to recover it.
00:27:33.802 [... identical error pair (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it.") repeats continuously from 10:46:22.080858 through 10:46:22.113333; duplicate occurrences elided ...]
00:27:33.808 [2024-11-15 10:46:22.113463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.808 [2024-11-15 10:46:22.113489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420
00:27:33.808 qpair failed and we were unable to recover it.
00:27:33.808 [2024-11-15 10:46:22.113580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.113604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.113774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.113817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.113958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.113982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.114122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.114146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.114342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.114372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.114482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.114507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.114663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.114701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.114829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.114853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.115046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.115070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.115207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.115230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 
00:27:33.808 [2024-11-15 10:46:22.115323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.115369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.115461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.115486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.115627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.115667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.115759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.115784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.115902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.115926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.116124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.116147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.116296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.116320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.116479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.116505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.116649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.116688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.116786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.116810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 
00:27:33.808 [2024-11-15 10:46:22.116955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.116979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.117230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.117254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.117451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.117477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.117597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.117621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.117762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.117800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.808 [2024-11-15 10:46:22.117991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-11-15 10:46:22.118014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.808 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.118145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.118183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.118380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.118406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.118588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.118613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.118816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.118840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 
00:27:33.809 [2024-11-15 10:46:22.118944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.118983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.119142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.119165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.119324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.119367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.119513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.119538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.119627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.119651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.119821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.119845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.120029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.120052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.120235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.120258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.120373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.120399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.120550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.120575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 
00:27:33.809 [2024-11-15 10:46:22.120683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.120720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.120860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.120903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.121068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.121107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.121265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.121288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.121407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.121434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.121534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.121559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.121698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.121722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.121837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.121861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.121995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.122020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.122201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.122225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 
00:27:33.809 [2024-11-15 10:46:22.122378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.122403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.122513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.122537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.122670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.122694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.122803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.122827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.122957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.122981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.123132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.123156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.123253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.123277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.123405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.123431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.123538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.123563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.123730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.123754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 
00:27:33.809 [2024-11-15 10:46:22.123904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.123928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.124094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.124118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.124293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.124317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.124462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.809 [2024-11-15 10:46:22.124487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.809 qpair failed and we were unable to recover it. 00:27:33.809 [2024-11-15 10:46:22.124611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.124636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.124818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.124841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.125018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.125056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.125214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.125238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.125378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.125404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.125502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.125527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 
00:27:33.810 [2024-11-15 10:46:22.125662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.125686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.125874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.125897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.126041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.126063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.126254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.126278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.126466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.126492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.126653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.126676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.126838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.126861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.126971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.126995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.127167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.127191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.127336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.127360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 
00:27:33.810 [2024-11-15 10:46:22.127511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.127536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.127664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.127693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.127826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.127850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.128018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.128042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.128155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.128179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.128292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.128316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.128431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.128456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.128597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.128621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.128784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.128807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.128999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.129022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 
00:27:33.810 [2024-11-15 10:46:22.129168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.129196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.129397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.129421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.129588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.129613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.129719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.129744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.129875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.129899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.130038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.130062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.130236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.130275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.130430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.130456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.130605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.130631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.130795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.130818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 
00:27:33.810 [2024-11-15 10:46:22.130948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.130986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.810 [2024-11-15 10:46:22.131123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.810 [2024-11-15 10:46:22.131148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.810 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.131313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.131337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.131481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.131506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.131671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.131695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.131866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.131889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.132026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.132050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.132245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.132268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.132439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.132464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.132573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.132598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 
00:27:33.811 [2024-11-15 10:46:22.132726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.132750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.132949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.132972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.133071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.133109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.133270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.133294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.133447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.133493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.133666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.133704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.133863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.133887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.134026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.134050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.134192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.134216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.134380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.134408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 
00:27:33.811 [2024-11-15 10:46:22.134514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.134538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.134714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.134742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.134893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.134916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.135046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.135070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.135271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.135295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.135471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.135497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.135608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.135648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.135795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.135836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.135990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.136013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 00:27:33.811 [2024-11-15 10:46:22.136191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.811 [2024-11-15 10:46:22.136214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.811 qpair failed and we were unable to recover it. 
00:27:33.811 [2024-11-15 10:46:22.136389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.811 [2024-11-15 10:46:22.136412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420
00:27:33.811 qpair failed and we were unable to recover it.
00:27:33.811-00:27:33.817 [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f5b54000b90 (addr=10.0.0.2, port=4420) repeats continuously from 10:46:22.136551 through 10:46:22.173315, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:33.817 [2024-11-15 10:46:22.173424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.173449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.173577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.173601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.173729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.173754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.173890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.173914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.174044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.174069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.174235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.174259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.174427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.174452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.174580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.174605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.174733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.174762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.174888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.174912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-11-15 10:46:22.175050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.175075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.175192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.175216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.175388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.175414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.175548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.175574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.175741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.175765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.175898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.175937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.176081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.176121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.176216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.176240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.176433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.176458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.176575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.176601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-11-15 10:46:22.176739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.176763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.176944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.176968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.177138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.177162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.177294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.177333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.177498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.177524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.817 [2024-11-15 10:46:22.177672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.817 [2024-11-15 10:46:22.177711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.817 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.177864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.177888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.177995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.178019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.178156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.178180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.178316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.178357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-11-15 10:46:22.178524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.178550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.178687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.178711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.178877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.178901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.178995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.179019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.179146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.179170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.179353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.179398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.179573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.179597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.179767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.179791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.179916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.179955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.180112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.180150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-11-15 10:46:22.180279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.180303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.180446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.180471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.180609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.180634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.180746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.180770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.180886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.180910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.181051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.181075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.181197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.181222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.181403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.181429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.181573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.181602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.181749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.181773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-11-15 10:46:22.181896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.181920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.182029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.182054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.182181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.182206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.182339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.182369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.182498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.182524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.182713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.182738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.182853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.182878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.183017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.183042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.183217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.183242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.183373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.183399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-11-15 10:46:22.183483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.183509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.183665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.183690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.183845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.183869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.184002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.184041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.184168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.184193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.184327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.184375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.184495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.818 [2024-11-15 10:46:22.184520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.818 qpair failed and we were unable to recover it. 00:27:33.818 [2024-11-15 10:46:22.184653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.184679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.184826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.184865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.184970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.184994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 
00:27:33.819 [2024-11-15 10:46:22.185161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.185186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.185329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.185375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.185485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.185510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.185683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.185708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.185865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.185889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.186061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.186086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.186221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.186261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.186390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.186416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.186545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.186571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.186668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.186693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 
00:27:33.819 [2024-11-15 10:46:22.186867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.186892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.187048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.187073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.187202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.187227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.187381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.187427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.187556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.187581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.187672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.187698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.187866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.187891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.188012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.188037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.188195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.188224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.188402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.188428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 
00:27:33.819 [2024-11-15 10:46:22.188531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.188557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.188686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.188710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.188876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.188901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.189083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.189107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.189238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.189263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.189355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.189385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.189478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.189504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.189613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.189653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.189737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.189762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.189881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.189905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 
00:27:33.819 [2024-11-15 10:46:22.190042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.190067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.190229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.190268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.190456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.190482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.190595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.190636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.190779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.190803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.190940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.190979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.191087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.191111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.191241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.191266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.191434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.191473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 00:27:33.819 [2024-11-15 10:46:22.191613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.819 [2024-11-15 10:46:22.191638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.819 qpair failed and we were unable to recover it. 
00:27:33.819 [2024-11-15 10:46:22.191775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.191799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.191965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.192003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.192130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.192154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.192297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.192321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.192452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.192494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.192661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.192701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.192826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.192850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.193004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.193028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.193191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.193215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.193354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.193398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 
00:27:33.820 [2024-11-15 10:46:22.193557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.193581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.193725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.193763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.193894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.193933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.194079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.194103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.194261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.194285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.194417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.194443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.194564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.194589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.194711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.194735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.194846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.194876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.195050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.195088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 
00:27:33.820 [2024-11-15 10:46:22.195225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.195254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.195463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.195489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.195608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.195633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.195822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.195846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.196011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.196035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.196170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.196208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.196318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.196360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.196473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.196498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.196657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.196682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.196812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.196836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 
00:27:33.820 [2024-11-15 10:46:22.196975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.196999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.197149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.197173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.197354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.197383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.197489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.197514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.197690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.197714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.197847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.197885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.198039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.198078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.198241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.198279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.198443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.198466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.198573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.198598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 
00:27:33.820 [2024-11-15 10:46:22.198713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.198737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.198928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.198952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.199073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.199097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.199283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.199307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.820 [2024-11-15 10:46:22.199503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.820 [2024-11-15 10:46:22.199527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.820 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.199694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.199718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.199895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.199919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.200023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.200046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.200181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.200205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.200317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.200341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 
00:27:33.821 [2024-11-15 10:46:22.200476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.200501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.200580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.200614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.200746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.200775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.200931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.200955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.201072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.201096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.201265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.201302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.201467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.201492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.201701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.201739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.201891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.201919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.202010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.202034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 
00:27:33.821 [2024-11-15 10:46:22.202189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.202213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.202357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.202404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.202549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.202574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.202747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.202770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.202937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.202960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.203099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.203136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.203276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.203314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.203470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.203495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.203622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.203647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.203794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.203818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 
00:27:33.821 [2024-11-15 10:46:22.203945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.203969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.204136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.204160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.204288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.204312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.204467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.204507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.821 [2024-11-15 10:46:22.204638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.821 [2024-11-15 10:46:22.204678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.821 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.204853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.204877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.204978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.205002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.205127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.205164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.205353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.205402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.205565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.205589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 
00:27:33.822 [2024-11-15 10:46:22.205699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.205723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.205874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.205898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.205978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.206001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.206142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.206166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.206344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.206391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.206545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.206570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.206752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.206775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.206914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.206937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.207066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.207090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.207229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.207253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 
00:27:33.822 [2024-11-15 10:46:22.207418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.207445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.207561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.207586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.207734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.207758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.207937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.207960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.208101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.208125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.208325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.208348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.208509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.208533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.208610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.208635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.208721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.208765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.208915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.208954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 
00:27:33.822 [2024-11-15 10:46:22.209137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.209160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.209324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.209348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.209467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.209491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.209594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.209618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.209726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.209750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.209896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.209920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.210069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.210107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.210236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.210273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.210479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.210504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 00:27:33.822 [2024-11-15 10:46:22.210649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.822 [2024-11-15 10:46:22.210673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.822 qpair failed and we were unable to recover it. 
00:27:33.822 [2024-11-15 10:46:22.210852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.210876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.211006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.211030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.211207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.211246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.211379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.211411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.211527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.211552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.211718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.211743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.211853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.211879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.212053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1214f30 is same with the state(6) to be set 00:27:33.823 [2024-11-15 10:46:22.212346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.212395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.212538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.212565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 
00:27:33.823 [2024-11-15 10:46:22.212719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.212759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.212885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.212923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.213092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.213116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.213289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.213314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.213449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.213479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.213568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.213603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.213796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.213824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.213965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.213989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.214165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.214190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.214366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.214406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 
00:27:33.823 [2024-11-15 10:46:22.214546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.214572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.214677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.214717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.214900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.214925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:33.823 [2024-11-15 10:46:22.215050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.823 [2024-11-15 10:46:22.215075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:33.823 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.215217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.215255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.215462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.215501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.215671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.215709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.215878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.215904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.216015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.216041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.216176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.216202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 
00:27:34.104 [2024-11-15 10:46:22.216332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.216357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.216480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.216505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.216639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.216664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.216837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.216862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.216988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.217012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.217133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.217158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.217312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.217337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.217488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.217513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.217645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.217670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.217822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.217847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 
00:27:34.104 [2024-11-15 10:46:22.217948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.217972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.104 qpair failed and we were unable to recover it. 00:27:34.104 [2024-11-15 10:46:22.218127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.104 [2024-11-15 10:46:22.218152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.218286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.218311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.218442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.218468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.218625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.218664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.218792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.218832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.218996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.219021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.219141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.219166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.219291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.219320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.219465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.219491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 
00:27:34.105 [2024-11-15 10:46:22.219631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.219670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.219838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.219863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.219985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.220010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.220089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.220114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.220208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.220233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.220370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.220396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.220528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.220557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.220730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.220756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.220859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.220884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.221012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.221053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 
00:27:34.105 [2024-11-15 10:46:22.221215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.221242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.221367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.221394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.221510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.221535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.221704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.221729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.221877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.221917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.222002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.222027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.222168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.222205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.222352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.222383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.222478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.222505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.222655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.222679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 
00:27:34.105 [2024-11-15 10:46:22.222826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.222851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.223025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.223049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.223189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.223213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.223355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.223387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.105 qpair failed and we were unable to recover it. 00:27:34.105 [2024-11-15 10:46:22.223574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.105 [2024-11-15 10:46:22.223599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.106 qpair failed and we were unable to recover it. 00:27:34.106 [2024-11-15 10:46:22.223762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.106 [2024-11-15 10:46:22.223785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.106 qpair failed and we were unable to recover it. 00:27:34.106 [2024-11-15 10:46:22.223922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.106 [2024-11-15 10:46:22.223945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.106 qpair failed and we were unable to recover it. 00:27:34.106 [2024-11-15 10:46:22.224178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.106 [2024-11-15 10:46:22.224202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.106 qpair failed and we were unable to recover it. 00:27:34.106 [2024-11-15 10:46:22.224335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.106 [2024-11-15 10:46:22.224381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.106 qpair failed and we were unable to recover it. 00:27:34.106 [2024-11-15 10:46:22.224560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.106 [2024-11-15 10:46:22.224585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.106 qpair failed and we were unable to recover it. 
00:27:34.106 [2024-11-15 10:46:22.224793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.106 [2024-11-15 10:46:22.224817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.106 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously in this window, from 2024-11-15 10:46:22.225015 through 10:46:22.267740 ...]
00:27:34.112 [2024-11-15 10:46:22.267996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.112 [2024-11-15 10:46:22.268019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.112 qpair failed and we were unable to recover it.
00:27:34.112 [2024-11-15 10:46:22.268159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.268183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.268326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.268352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.268522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.268546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.268676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.268701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.268882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.268906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.269045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.269069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.269220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.269261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.269440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.269466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.269700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.269723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.269945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.269968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 
00:27:34.112 [2024-11-15 10:46:22.270100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.270124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.270272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.270296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.270488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.270514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.270704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.270743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.270895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.270919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.271118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.271141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.271302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.271326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.271452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.271492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.271586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.271611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 00:27:34.112 [2024-11-15 10:46:22.271764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.112 [2024-11-15 10:46:22.271788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.112 qpair failed and we were unable to recover it. 
00:27:34.112 [2024-11-15 10:46:22.271968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.271992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.272199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.272222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.272444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.272470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.272602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.272627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.272794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.272818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.272973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.272997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.273113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.273148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.273292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.273317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.273491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.273516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.273635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.273674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 
00:27:34.113 [2024-11-15 10:46:22.273771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.273796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.273987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.274026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.274221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.274245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.274429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.274454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.274607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.274631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.274779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.274819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.274964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.274989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.275119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.275148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.275297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.275335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.275469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.275494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 
00:27:34.113 [2024-11-15 10:46:22.275596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.275621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.275786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.275825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.276037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.276060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.276201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.276225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.276429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.276469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.276585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.276608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.276795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.276818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.277042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.277065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.277291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.277323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.277459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.277485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 
00:27:34.113 [2024-11-15 10:46:22.277598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.277623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.277742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.277767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.277917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.277965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.278131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.278155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.278297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.278335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.113 qpair failed and we were unable to recover it. 00:27:34.113 [2024-11-15 10:46:22.278475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.113 [2024-11-15 10:46:22.278500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.278652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.278696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.278892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.278915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.279083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.279106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.279344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.279390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 
00:27:34.114 [2024-11-15 10:46:22.279530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.279554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.279728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.279762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.279957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.279981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.280169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.280193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.280374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.280418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.280565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.280590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.280738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.280776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.280910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.280934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.281099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.281123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.281307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.281356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 
00:27:34.114 [2024-11-15 10:46:22.281495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.281519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.281675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.281714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.281877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.281903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.282093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.282117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.282306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.282329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.282484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.282510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.282701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.282740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.282935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.282959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.283157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.283180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.283402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.283443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 
00:27:34.114 [2024-11-15 10:46:22.283581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.283606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.283733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.283772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.283977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.284000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.284120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.114 [2024-11-15 10:46:22.284157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.114 qpair failed and we were unable to recover it. 00:27:34.114 [2024-11-15 10:46:22.284353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.284382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.284498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.284523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.284646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.284671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.284810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.284834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.285041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.285065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.285305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.285328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 
00:27:34.115 [2024-11-15 10:46:22.285496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.285521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.285647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.285689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.285854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.285877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.286080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.286103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.286291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.286315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.286503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.286529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.286694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.286718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.286934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.286957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.287088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.287112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.287255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.287280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 
00:27:34.115 [2024-11-15 10:46:22.287491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.287516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.287632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.287673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.287916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.287940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.288063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.288086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.288251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.288290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.288499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.288524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.288664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.288697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.288849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.288882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.289057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.289094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.289305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.289328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 
00:27:34.115 [2024-11-15 10:46:22.289504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.289529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.289654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.289692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.289835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.289869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.290042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.290067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.290267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.290290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.115 qpair failed and we were unable to recover it. 00:27:34.115 [2024-11-15 10:46:22.290459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.115 [2024-11-15 10:46:22.290485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 00:27:34.116 [2024-11-15 10:46:22.290587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.290612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 00:27:34.116 [2024-11-15 10:46:22.290780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.290819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 00:27:34.116 [2024-11-15 10:46:22.290978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.291002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 00:27:34.116 [2024-11-15 10:46:22.291184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.291216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 
00:27:34.116 [2024-11-15 10:46:22.291376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.291416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 00:27:34.116 [2024-11-15 10:46:22.291578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.291604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 00:27:34.116 [2024-11-15 10:46:22.291791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.291816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 00:27:34.116 [2024-11-15 10:46:22.291984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.292009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 00:27:34.116 [2024-11-15 10:46:22.292132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.292171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 00:27:34.116 [2024-11-15 10:46:22.292297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.292338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 00:27:34.116 [2024-11-15 10:46:22.292541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.292566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 00:27:34.116 [2024-11-15 10:46:22.292700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.292725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 00:27:34.116 [2024-11-15 10:46:22.292828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.292853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 00:27:34.116 [2024-11-15 10:46:22.293031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.293069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it. 
00:27:34.116 [2024-11-15 10:46:22.293173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.116 [2024-11-15 10:46:22.293199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.116 qpair failed and we were unable to recover it.
00:27:34.122 [... the same error pair repeats continuously from 10:46:22.293 through 10:46:22.336: posix.c:1054:posix_sock_create reports connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x1206fa0 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:34.122 [2024-11-15 10:46:22.336590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.122 [2024-11-15 10:46:22.336615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.122 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.336836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.336860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.337009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.337032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.337274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.337298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.337416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.337445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.337602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.337641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.337769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.337792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.337941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.337980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.338117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.338156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.338385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.338425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 
00:27:34.123 [2024-11-15 10:46:22.338656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.338680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.338887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.338910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.339111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.339149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.339304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.339328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.339448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.339473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.339656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.339693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.339868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.339893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.340096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.340120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.340265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.340288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.340488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.340513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 
00:27:34.123 [2024-11-15 10:46:22.340683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.340707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.340842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.340865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.341061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.341093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.341272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.341296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.341508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.341532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.341667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.341694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.341893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.341917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.342110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.342134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.342283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.342306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.342461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.342486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 
00:27:34.123 [2024-11-15 10:46:22.342617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.342642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.342861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.342885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.343107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.343146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.343351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.343396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.123 qpair failed and we were unable to recover it. 00:27:34.123 [2024-11-15 10:46:22.343583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.123 [2024-11-15 10:46:22.343608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.343689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.343713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.343842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.343870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.344039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.344078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.344237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.344261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.344392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.344418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 
00:27:34.124 [2024-11-15 10:46:22.344560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.344585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.344706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.344730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.344866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.344905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.345063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.345087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.345227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.345252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.345394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.345419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.345553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.345579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.345728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.345753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.345901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.345925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.346198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.346221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 
00:27:34.124 [2024-11-15 10:46:22.346487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.346512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.346654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.346703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.346892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.346932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.347115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.347163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.347447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.347473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.347621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.347645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.347842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.347866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.348080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.348119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.348358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.348409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.348574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.348598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 
00:27:34.124 [2024-11-15 10:46:22.348748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.348771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.348924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.348948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.349074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.349099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.349306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.349342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.349540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.349564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.349711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.349750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.349881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.349906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.350086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.350124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.350306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.350346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.350516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.350569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 
00:27:34.124 [2024-11-15 10:46:22.350726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.124 [2024-11-15 10:46:22.350776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.124 qpair failed and we were unable to recover it. 00:27:34.124 [2024-11-15 10:46:22.351020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.351060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.351283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.351307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.351503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.351565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.351740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.351763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.351929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.351953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.352152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.352185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.352338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.352379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.352527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.352566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.352731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.352755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 
00:27:34.125 [2024-11-15 10:46:22.352894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.352918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.353058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.353083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.353350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.353417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.353565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.353617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.353797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.353820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.353954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.353978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.354185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.354209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.354415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.354440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.354609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.354660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.354878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.354902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 
00:27:34.125 [2024-11-15 10:46:22.355085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.355109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.355330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.355377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.355522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.355562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.355761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.355801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.355964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.355987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.356158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.356196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.356399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.356424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.356642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.356681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.356909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.356932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.357144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.357168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 
00:27:34.125 [2024-11-15 10:46:22.357351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.125 [2024-11-15 10:46:22.357383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.125 qpair failed and we were unable to recover it. 00:27:34.125 [2024-11-15 10:46:22.357591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.357640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.357815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.357839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.357978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.358016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.358201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.358242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.358477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.358518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.358663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.358702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.358840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.358864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.358946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.358970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.359110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.359134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 
00:27:34.126 [2024-11-15 10:46:22.359345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.359377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.359616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.359641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.359794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.359818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.360019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.360043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.360206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.360229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.360427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.360468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.360600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.360639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.360814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.360837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.360976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.361015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.361125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.361160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 
00:27:34.126 [2024-11-15 10:46:22.361303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.361328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.361582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.361608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.361799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.361824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.362065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.362104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.362313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.362353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.362623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.362648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.362808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.362833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.362984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.363009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.363153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.363178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.363380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.363421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 
00:27:34.126 [2024-11-15 10:46:22.363614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.363654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.363912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.363942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.126 qpair failed and we were unable to recover it. 00:27:34.126 [2024-11-15 10:46:22.364120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.126 [2024-11-15 10:46:22.364145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.364301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.364326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.364498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.364524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.364699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.364738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.364887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.364926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.365026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.365051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.365174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.365200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.365329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.365360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 
00:27:34.127 [2024-11-15 10:46:22.365475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.365499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.365603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.365628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.365812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.365860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.366086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.366134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.366289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.366314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.366513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.366539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.366771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.366810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.367061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.367100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.367283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.367334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.367522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.367548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 
00:27:34.127 [2024-11-15 10:46:22.367728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.367753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.367899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.367924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.368070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.368115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.368350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.368403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.368572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.368613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.368788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.368828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.369020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.369045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.369181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.369206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.369426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.369457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.369589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.369614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 
00:27:34.127 [2024-11-15 10:46:22.369784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.369824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.370006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.370046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.370295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.370320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.370447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.370472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.370597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.127 [2024-11-15 10:46:22.370622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.127 qpair failed and we were unable to recover it. 00:27:34.127 [2024-11-15 10:46:22.370792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.370817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.370934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.370984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.371211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.371257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.371461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.371487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.371698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.371724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 
00:27:34.128 [2024-11-15 10:46:22.371970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.372009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.372258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.372298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.372464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.372505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.372657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.372681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.372811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.372841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.372967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.372992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.373106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.373131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.373285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.373320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.373428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.373454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.373584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.373609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 
00:27:34.128 [2024-11-15 10:46:22.373832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.373857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.374051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.374091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.374347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.374398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.374584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.374609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.374771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.374796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.374963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.375019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.375235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.375275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.375458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.375499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.375731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.375790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.375969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.375995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 
00:27:34.128 [2024-11-15 10:46:22.376204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.376229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.376343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.376374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.376535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.376561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.376744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.376784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.377001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.377026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.377146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.377172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.377327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.377352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.377521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.377547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.128 [2024-11-15 10:46:22.377731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.128 [2024-11-15 10:46:22.377787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.128 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.378067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.378093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 
00:27:34.129 [2024-11-15 10:46:22.378194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.378219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.378392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.378418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.378619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.378645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.378809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.378834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.378997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.379052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.379230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.379255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.379399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.379425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.379576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.379608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.379769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.379794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.379909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.379934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 
00:27:34.129 [2024-11-15 10:46:22.380084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.380124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.380355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.380389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.380618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.380643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.380800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.380825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.381045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.381070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.381205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.381245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.381503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.381560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.381765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.381820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.382061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.382086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.382214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.382239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 
00:27:34.129 [2024-11-15 10:46:22.382402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.382427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.382600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.382654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.382862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.382917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.383120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.383145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.383313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.383338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.383562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.383588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.383701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.383726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.383898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.383954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.384144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.384169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.384327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.384352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 
00:27:34.129 [2024-11-15 10:46:22.384539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.384564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.384771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.384827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.385093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.385118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.129 [2024-11-15 10:46:22.385241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.129 [2024-11-15 10:46:22.385266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.129 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.385397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.385432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.385603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.385628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.385862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.385918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.386130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.386155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.386278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.386306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.386510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.386536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 
00:27:34.130 [2024-11-15 10:46:22.386766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.386791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.386913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.386963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.387108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.387133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.387304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.387329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.387488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.387514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.387678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.387713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.387829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.387855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.388043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.388069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.388243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.388269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.388424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.388450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 
00:27:34.130 [2024-11-15 10:46:22.388564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.388589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.388727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.388762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.388938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.388978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.389197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.389243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.389444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.389471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.389623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.389648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.389793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.389819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.390014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.390040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.390223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.390263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 00:27:34.130 [2024-11-15 10:46:22.390516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.130 [2024-11-15 10:46:22.390573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.130 qpair failed and we were unable to recover it. 
00:27:34.130 [2024-11-15 10:46:22.390798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.390823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.391036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.391061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.391189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.391222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.391419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.391464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.391718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.391774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.392002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.392027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.392186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.392211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.392405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.392431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.392563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.392589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.392773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.392829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 
00:27:34.131 [2024-11-15 10:46:22.393067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.393092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.393255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.393279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.393476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.393502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.393669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.393724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.393970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.394026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.394248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.394289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.394527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.394553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.394650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.394675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.394812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.394837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.395018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.395074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 
00:27:34.131 [2024-11-15 10:46:22.395335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.395394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.395620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.395646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.395822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.395847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.395965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.395990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.396195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.396220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.396400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.396459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.396624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.396649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.396814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.396840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.397018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.397043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.397187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.397212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 
00:27:34.131 [2024-11-15 10:46:22.397433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.397493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.397651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.397707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.397882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.397917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.398091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.131 [2024-11-15 10:46:22.398116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.131 qpair failed and we were unable to recover it. 00:27:34.131 [2024-11-15 10:46:22.398269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.398295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.398486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.398513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.398633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.398691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.398923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.398978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.399194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.399219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.399386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.399413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 
00:27:34.132 [2024-11-15 10:46:22.399595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.399620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.399776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.399840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.400097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.400123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.400256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.400281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.400471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.400531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.400796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.400858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.401063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.401089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.401199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.401224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.401345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.401382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.401566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.401592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 
00:27:34.132 [2024-11-15 10:46:22.401717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.401768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.401952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.402008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.402216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.402241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.402394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.402422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.402550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.402576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.402705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.402730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.402933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.402992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.403228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.403268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.403413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.403439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.403603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.403628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 
00:27:34.132 [2024-11-15 10:46:22.403781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.403805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.404004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.404030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.404190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.404229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.404394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.404443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.404657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.404682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.404895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.404920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.405049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.405074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.405159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.405184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.405285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.405310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.132 qpair failed and we were unable to recover it. 00:27:34.132 [2024-11-15 10:46:22.405441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.132 [2024-11-15 10:46:22.405467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 
00:27:34.133 [2024-11-15 10:46:22.405632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.405658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.405856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.405881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.406068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.406093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.406221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.406247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.406376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.406423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.406645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.406670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.406811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.406836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.406953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.406978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.407215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.407254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.407456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.407513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 
00:27:34.133 [2024-11-15 10:46:22.407707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.407746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.407986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.408012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.408215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.408240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.408404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.408429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.408627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.408685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.408879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.408935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.409135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.409161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.409351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.409383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.409552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.409581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.409725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.409771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 
00:27:34.133 [2024-11-15 10:46:22.410014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.410071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.410267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.410292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.410421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.410447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.410600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.410626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.410821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.410846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.411023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.411080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.411305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.411356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.411538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.411563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.411792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.411817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.412002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.412061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 
00:27:34.133 [2024-11-15 10:46:22.412286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.412326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.412533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.133 [2024-11-15 10:46:22.412591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.133 qpair failed and we were unable to recover it. 00:27:34.133 [2024-11-15 10:46:22.412821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.412847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.413006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.413031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.413150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.413176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.413384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.413425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.413636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.413693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.413863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.413911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.414061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.414086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.414206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.414232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 
00:27:34.134 [2024-11-15 10:46:22.414406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.414433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.414559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.414598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.414780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.414820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.415002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.415027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.415153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.415178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.415314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.415343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.415543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.415568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.415693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.415732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.415928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.415954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.416071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.416097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 
00:27:34.134 [2024-11-15 10:46:22.416246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.416271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.416369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.416395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.416566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.416605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.416805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.416860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.417046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.417071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.417233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.417257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.417416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.417449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.417672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.417728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.417937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.417993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.418275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.418301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 
00:27:34.134 [2024-11-15 10:46:22.418488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.418514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.418665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.418690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.418852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.418893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.419102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.419157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.419310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.419350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.419573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.419599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.419800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.419826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.134 [2024-11-15 10:46:22.419974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.134 [2024-11-15 10:46:22.419999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.134 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.420166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.420206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.420372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.420413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 
00:27:34.135 [2024-11-15 10:46:22.420566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.420591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.420723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.420748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.420864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.420895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.421026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.421051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.421144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.421169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.421342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.421392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.421486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.421511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.421603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.421628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.421825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.421854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.422002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.422027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 
00:27:34.135 [2024-11-15 10:46:22.422197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.422236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.422392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.422418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.422595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.422621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.422748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.422773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.422926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.422952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.423126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.423151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.423346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.423400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.423623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.423648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.423828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.423853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.424022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.424047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 
00:27:34.135 [2024-11-15 10:46:22.424282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.424322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.424515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.424572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.424746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.424803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.425067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.425092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.425203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.425237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.135 [2024-11-15 10:46:22.425401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.135 [2024-11-15 10:46:22.425427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.135 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.425655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.425680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.425868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.425923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.426088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.426113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.426240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.426274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 
00:27:34.136 [2024-11-15 10:46:22.426454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.426481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.426680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.426734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.426926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.426951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.427111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.427136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.427284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.427309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.427449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.427475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.427640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.427721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.427956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.427998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.428175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.428200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.428421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.428447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 
00:27:34.136 [2024-11-15 10:46:22.428629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.428669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.428926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.428951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.429159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.429184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.429379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.429405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.429543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.429583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.429808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.429865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.430023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.430048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.430187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.430212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.430358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.430397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.430522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.430548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 
00:27:34.136 [2024-11-15 10:46:22.430732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.430773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.431719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.431749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.431933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.431959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.432137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.432162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.432359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.432404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.432556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.432581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.432742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.432781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.433002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.433059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.433204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.433229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-11-15 10:46:22.433345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.136 [2024-11-15 10:46:22.433388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.136 qpair failed and we were unable to recover it. 
00:27:34.137 [2024-11-15 10:46:22.433510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.433536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.433658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.433698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.433796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.433836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.433957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.433981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.434121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.434146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.434318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.434342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.434506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.434531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.434660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.434685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.434842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.434867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.435044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.435068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 
00:27:34.137 [2024-11-15 10:46:22.435216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.435244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.435410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.435437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.435567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.435606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.435793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.435817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.435974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.436013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.436115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.436139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.436258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.436282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.436431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.436472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.436655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.436695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-11-15 10:46:22.436847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.137 [2024-11-15 10:46:22.436887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.137 qpair failed and we were unable to recover it. 
00:27:34.143 [2024-11-15 10:46:22.469881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.469905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.470039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.470063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.470160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.470183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.470391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.470420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.470549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.470575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.470725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.470751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.470948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.470995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.471144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.471168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.471262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.471285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.471456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.471482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 
00:27:34.143 [2024-11-15 10:46:22.471616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.471656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.471823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.471861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.472031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.472063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.472244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.472268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.472377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.472402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.472523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.472547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.472716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.472741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.472921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.472965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.473169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.473194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.473376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.473401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 
00:27:34.143 [2024-11-15 10:46:22.473533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.473579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.473707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.473730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.473907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.473945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.474122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.474163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.474274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.474299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.143 [2024-11-15 10:46:22.474457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.143 [2024-11-15 10:46:22.474498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.143 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.474627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.474674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.474810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.474834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.474966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.475007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.475185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.475208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 
00:27:34.144 [2024-11-15 10:46:22.475409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.475436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.475550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.475589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.475740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.475765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.475942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.475982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.476109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.476133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.476244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.476267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.476405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.476429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.476528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.476553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.476679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.476703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.476862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.476886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 
00:27:34.144 [2024-11-15 10:46:22.477008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.477032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.477187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.477219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.477376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.477401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.477512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.477559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.477726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.477768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.477900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.477941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.478093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.478118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.478262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.478285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.478460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.478520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.478692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.478748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 
00:27:34.144 [2024-11-15 10:46:22.478851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.478877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.479036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.479060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.479214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.479239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.479394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.479420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.479539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.479564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.479757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.479782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.479976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.480000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.480132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.480157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.480312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.480336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.480505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.480553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 
00:27:34.144 [2024-11-15 10:46:22.480665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.144 [2024-11-15 10:46:22.480705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.144 qpair failed and we were unable to recover it. 00:27:34.144 [2024-11-15 10:46:22.480871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.480910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.481031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.481054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.481234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.481258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.481422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.481447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.481578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.481601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.481756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.481780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.481889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.481913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.482082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.482106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.482238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.482262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 
00:27:34.145 [2024-11-15 10:46:22.482400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.482425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.482551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.482575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.482716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.482740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.482847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.482870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.482989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.483014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.483177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.483200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.483336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.483360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.483540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.483565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.483743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.483766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.483917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.483940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 
00:27:34.145 [2024-11-15 10:46:22.484062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.484086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.484238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.484262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.484441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.484466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.484630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.484668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.484801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.484840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.484978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.485002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.485156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.485181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.485308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.485332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.485508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.485533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.485670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.485694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 
00:27:34.145 [2024-11-15 10:46:22.485846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.485870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.485998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.486023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.486203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.486227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.486324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.145 [2024-11-15 10:46:22.486348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.145 qpair failed and we were unable to recover it. 00:27:34.145 [2024-11-15 10:46:22.486530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.486553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.486705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.486728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.486858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.486883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.487083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.487106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.487199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.487222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.487320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.487344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 
00:27:34.146 [2024-11-15 10:46:22.487504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.487529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.487707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.487746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.487912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.487951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.488120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.488143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.488267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.488291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.488500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.488552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.488707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.488754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.488881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.488919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.489043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.489067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.489232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.489257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 
00:27:34.146 [2024-11-15 10:46:22.489383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.489408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.489523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.489548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.489734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.489777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.489877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.489901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.490109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.490132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.490285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.490309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.490417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.490442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.490609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.490633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.490754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.490794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.490899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.490923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 
00:27:34.146 [2024-11-15 10:46:22.491113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.491145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.491276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.491300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.491438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.491477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.491649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.491688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.491870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.491910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.492039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.492063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.492177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.492200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.492371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.492398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.492576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.492616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.492723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.492772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 
00:27:34.146 [2024-11-15 10:46:22.492980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.493019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.493182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.493206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.493373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.146 [2024-11-15 10:46:22.493398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.146 qpair failed and we were unable to recover it. 00:27:34.146 [2024-11-15 10:46:22.493531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.493556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.493720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.493767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.493885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.493923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.494041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.494081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.494249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.494284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.494465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.494508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.494649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.494700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 
00:27:34.147 [2024-11-15 10:46:22.494858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.494882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.495012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.495051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.495181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.495205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.495386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.495411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.495538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.495578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.495736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.495775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.495978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.496017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.496146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.496169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.496384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.496409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 00:27:34.147 [2024-11-15 10:46:22.496538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.147 [2024-11-15 10:46:22.496585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.147 qpair failed and we were unable to recover it. 
00:27:34.152 [2024-11-15 10:46:22.534120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.534168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.534320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.534344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.534466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.534512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.534679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.534719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.534886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.534925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.535015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.535039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.535199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.535228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.535430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.535484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.535667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.535706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.535867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.535914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 
00:27:34.152 [2024-11-15 10:46:22.536016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.536040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.536122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.536146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.536272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.536297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.536514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.536554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.536665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.536690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.536853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.536903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.152 qpair failed and we were unable to recover it. 00:27:34.152 [2024-11-15 10:46:22.537103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.152 [2024-11-15 10:46:22.537128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.537302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.537327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.537507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.537555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.537750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.537789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 
00:27:34.153 [2024-11-15 10:46:22.537952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.537990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.538111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.538135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.538296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.538320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.538532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.538580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.538742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.538781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.538927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.538968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.539071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.539096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.539240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.539264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.539455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.539483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.539615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.539639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 
00:27:34.153 [2024-11-15 10:46:22.539766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.539790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.539959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.539999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.540221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.540245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.540374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.540406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.540566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.540590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.540695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.540726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.540889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.540913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.541039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.541062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.541229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.541254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.541458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.541484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 
00:27:34.153 [2024-11-15 10:46:22.541630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.541654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.541825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.541849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.542018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.542043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.542226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.542251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.542438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.542478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.542595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.542619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.542830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.542864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.543001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.543040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.543176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.543200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.543411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.543436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 
00:27:34.153 [2024-11-15 10:46:22.543567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.543591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.543753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.543781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.543899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.543923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.544078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.544103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.544232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.544257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.544409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.544434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.544581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.544606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.544813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.544838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.544979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.153 [2024-11-15 10:46:22.545003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.153 qpair failed and we were unable to recover it. 00:27:34.153 [2024-11-15 10:46:22.545205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.154 [2024-11-15 10:46:22.545229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.154 qpair failed and we were unable to recover it. 
00:27:34.154 [2024-11-15 10:46:22.545415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.154 [2024-11-15 10:46:22.545442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.154 qpair failed and we were unable to recover it. 00:27:34.154 [2024-11-15 10:46:22.545619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.154 [2024-11-15 10:46:22.545645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.154 qpair failed and we were unable to recover it. 00:27:34.154 [2024-11-15 10:46:22.545771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.154 [2024-11-15 10:46:22.545818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.154 qpair failed and we were unable to recover it. 00:27:34.154 [2024-11-15 10:46:22.546032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.154 [2024-11-15 10:46:22.546058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.154 qpair failed and we were unable to recover it. 00:27:34.154 [2024-11-15 10:46:22.546157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.154 [2024-11-15 10:46:22.546196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.154 qpair failed and we were unable to recover it. 00:27:34.154 [2024-11-15 10:46:22.546341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.154 [2024-11-15 10:46:22.546374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.154 qpair failed and we were unable to recover it. 00:27:34.154 [2024-11-15 10:46:22.546508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.154 [2024-11-15 10:46:22.546532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.154 qpair failed and we were unable to recover it. 00:27:34.154 [2024-11-15 10:46:22.546739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.154 [2024-11-15 10:46:22.546763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.154 qpair failed and we were unable to recover it. 00:27:34.154 [2024-11-15 10:46:22.546940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.154 [2024-11-15 10:46:22.546965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.154 qpair failed and we were unable to recover it. 00:27:34.154 [2024-11-15 10:46:22.547133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.154 [2024-11-15 10:46:22.547158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.154 qpair failed and we were unable to recover it. 
00:27:34.433 [2024-11-15 10:46:22.547274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.547310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.547440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.547466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.547623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.547648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.547763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.547787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.547937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.547990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.548144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.548170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.548357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.548392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.548520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.548545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.548719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.548745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.548906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.548931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 
00:27:34.433 [2024-11-15 10:46:22.549115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.549154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.549327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.549358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.549530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.549556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.549732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.549772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.549912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.549937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.550149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.550174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.550359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.433 [2024-11-15 10:46:22.550396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.433 qpair failed and we were unable to recover it. 00:27:34.433 [2024-11-15 10:46:22.550564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.550590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.550761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.550800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.550960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.550985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 
00:27:34.434 [2024-11-15 10:46:22.551201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.551226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.551347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.551396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.551519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.551557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.551743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.551782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.551913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.551953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.552154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.552193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.552367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.552415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.552545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.552570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.552780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.552819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.552952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.552999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 
00:27:34.434 [2024-11-15 10:46:22.553114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.553140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.553347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.553383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.553506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.553530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.553694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.553732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.553846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.553886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.554017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.554041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.554169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.554193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.554390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.554415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.554567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.554590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.554760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.554786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 
00:27:34.434 [2024-11-15 10:46:22.554959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.554983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.555136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.555161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.555314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.555337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.555469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.555528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.555728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.555767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.555885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.555910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.556105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.556143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.556275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.556298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.556496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.556537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.556731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.556770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 
00:27:34.434 [2024-11-15 10:46:22.556948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.557002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.557192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.557217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.557356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.557406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.557571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.557611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.557759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.557807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.557938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.557977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.558135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.558158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.558293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.558318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.558518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.558573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.558729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.558754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 
00:27:34.434 [2024-11-15 10:46:22.558954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.558992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.434 [2024-11-15 10:46:22.559127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.434 [2024-11-15 10:46:22.559150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.434 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.559309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.559333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.559542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.559589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.559761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.559816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.559950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.559989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.560137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.560162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.560353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.560387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.560536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.560583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.560749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.560796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 
00:27:34.435 [2024-11-15 10:46:22.560986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.561024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.561184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.561209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.561424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.561465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.561642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.561691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.561823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.561869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.562074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.562098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.562302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.562326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.562500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.562541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.562682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.562721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.562914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.562953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 
00:27:34.435 [2024-11-15 10:46:22.563164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.563203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.563335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.563359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.563603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.563648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.563811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.563862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.564017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.564057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.564208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.564232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.564424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.564473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.564674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.564698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.564847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.564871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.565074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.565098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 
00:27:34.435 [2024-11-15 10:46:22.565270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.565294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.565446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.565486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.565668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.565707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.565910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.565965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.566097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.566137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.566237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.566261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.566481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.566527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.566648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.566687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.566823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.566848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.567001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.567026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 
00:27:34.435 [2024-11-15 10:46:22.567210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.567234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.567387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.567413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.567590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.567639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.567793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.567833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.568000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.568038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.568202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.435 [2024-11-15 10:46:22.568225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.435 qpair failed and we were unable to recover it. 00:27:34.435 [2024-11-15 10:46:22.568379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.568420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.568599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.568652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.568860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.568904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.569083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.569107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 
00:27:34.436 [2024-11-15 10:46:22.569251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.569275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.569457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.569498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.569609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.569649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.569831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.569877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.569999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.570023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.570183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.570208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.570375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.570400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.570543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.570567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.570722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.570748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.570933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.570957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 
00:27:34.436 [2024-11-15 10:46:22.571119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.571143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.571326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.571350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.571561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.571587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.571801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.571840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.572001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.572041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.572216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.572240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.572400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.572430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.572596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.572648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.572818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.572842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.573013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.573053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 
00:27:34.436 [2024-11-15 10:46:22.573211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.573236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.573415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.573440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.573617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.573658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.573844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.573892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.574109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.574149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.574348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.574380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.574529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.574570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.574804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.574851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.574997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.575046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.575167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.575190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 
00:27:34.436 [2024-11-15 10:46:22.575398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.575423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.575596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.575636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.575844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.575884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.576086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.576139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.576302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.576326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.576494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.576519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.576662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.576708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.576883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.576922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.577086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.577125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.577250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.577274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 
00:27:34.436 [2024-11-15 10:46:22.577517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.436 [2024-11-15 10:46:22.577557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.436 qpair failed and we were unable to recover it. 00:27:34.436 [2024-11-15 10:46:22.577726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.577783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.577952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.577998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.578165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.578195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.578372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.578411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.578566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.578605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.578768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.578806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.578967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.579007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.579199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.579224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.579402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.579427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 
00:27:34.437 [2024-11-15 10:46:22.579600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.579653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.579837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.579875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.580073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.580113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.580301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.580325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.580561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.580610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.580778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.580826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.581019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.581058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.581241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.581266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.581405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.581430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.581586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.581626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 
00:27:34.437 [2024-11-15 10:46:22.581754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.581794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.581946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.581985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.582177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.582201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.582353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.582385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.582505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.582552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.582724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.582776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.582960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.582999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.583146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.583170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.583351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.583392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.583591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.583615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 
00:27:34.437 [2024-11-15 10:46:22.583788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.583836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.583986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.584031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.584234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.584258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.584445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.584486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.584679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.437 [2024-11-15 10:46:22.584704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.437 qpair failed and we were unable to recover it. 00:27:34.437 [2024-11-15 10:46:22.584820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.584858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.585020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.585043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.585209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.585234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.585398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.585423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.585555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.585601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 
00:27:34.438 [2024-11-15 10:46:22.585798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.585836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.585971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.586022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.586150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.586175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.586403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.586429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.586607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.586631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.586816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.586863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.587075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.587114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.587297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.587331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.587590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.587637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.587807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.587853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 
00:27:34.438 [2024-11-15 10:46:22.588026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.588074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.588211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.588234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.588450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.588497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.588642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.588696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.588938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.588985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.589123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.589147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.589284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.589315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.589489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.589516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.589686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.589710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.589831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.589879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 
00:27:34.438 [2024-11-15 10:46:22.590039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.590062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.590199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.590223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.590431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.590481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.590640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.590686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.590832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.590882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.591011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.591036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.591144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.591168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.591383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.591408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.591580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.591605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.591822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.591846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 
00:27:34.438 [2024-11-15 10:46:22.592011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.592058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.592222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.592250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.592438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.438 [2024-11-15 10:46:22.592490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.438 qpair failed and we were unable to recover it. 00:27:34.438 [2024-11-15 10:46:22.592680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.439 [2024-11-15 10:46:22.592727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.439 qpair failed and we were unable to recover it. 00:27:34.439 [2024-11-15 10:46:22.592927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.439 [2024-11-15 10:46:22.592979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.439 qpair failed and we were unable to recover it. 00:27:34.439 [2024-11-15 10:46:22.593092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.439 [2024-11-15 10:46:22.593116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.439 qpair failed and we were unable to recover it. 00:27:34.439 [2024-11-15 10:46:22.593243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.439 [2024-11-15 10:46:22.593268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.439 qpair failed and we were unable to recover it. 00:27:34.439 [2024-11-15 10:46:22.593437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.439 [2024-11-15 10:46:22.593485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.439 qpair failed and we were unable to recover it. 00:27:34.439 [2024-11-15 10:46:22.593653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.439 [2024-11-15 10:46:22.593677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.439 qpair failed and we were unable to recover it. 00:27:34.439 [2024-11-15 10:46:22.593806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.439 [2024-11-15 10:46:22.593851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.439 qpair failed and we were unable to recover it. 
00:27:34.439 [2024-11-15 10:46:22.593961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.439 [2024-11-15 10:46:22.593986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.439 qpair failed and we were unable to recover it.
00:27:34.444 [... this three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats back-to-back from 10:46:22.593961 through 10:46:22.641100 ...]
00:27:34.444 [2024-11-15 10:46:22.641219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-11-15 10:46:22.641243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.444 qpair failed and we were unable to recover it. 00:27:34.444 [2024-11-15 10:46:22.641390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-11-15 10:46:22.641415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.444 qpair failed and we were unable to recover it. 00:27:34.444 [2024-11-15 10:46:22.641608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-11-15 10:46:22.641658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.444 qpair failed and we were unable to recover it. 00:27:34.444 [2024-11-15 10:46:22.641832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-11-15 10:46:22.641879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.444 qpair failed and we were unable to recover it. 00:27:34.444 [2024-11-15 10:46:22.642079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-11-15 10:46:22.642128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.444 qpair failed and we were unable to recover it. 00:27:34.444 [2024-11-15 10:46:22.642280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-11-15 10:46:22.642304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.444 qpair failed and we were unable to recover it. 00:27:34.444 [2024-11-15 10:46:22.642472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-11-15 10:46:22.642520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.444 qpair failed and we were unable to recover it. 00:27:34.444 [2024-11-15 10:46:22.642763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-11-15 10:46:22.642811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.444 qpair failed and we were unable to recover it. 00:27:34.444 [2024-11-15 10:46:22.642948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.642995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.643170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.643195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 
00:27:34.445 [2024-11-15 10:46:22.643377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.643402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.643585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.643631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.643784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.643833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.643995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.644057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.644206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.644231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.644420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.644474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.644657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.644703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.644879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.644928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.645118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.645165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.645380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.645420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 
00:27:34.445 [2024-11-15 10:46:22.645578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.645643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.645838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.645889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.646113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.646160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.646332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.646356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.646546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.646571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.646797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.646848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.647039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.647087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.647262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.647287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.647446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.647505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.647757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.647805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 
00:27:34.445 [2024-11-15 10:46:22.648013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.648059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.648266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.648291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.648465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.648490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.648679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.648729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.648914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.648961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.649175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.649225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.649450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.649498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.649683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.649732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.649942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.649988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.650201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.650226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 
00:27:34.445 [2024-11-15 10:46:22.650414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.650464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.650597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.650648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.650831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.650878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.651096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.651144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.651296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.651321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.651544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.651593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.651793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.651835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.652040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.652087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.652298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.652322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.652525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.652574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 
00:27:34.445 [2024-11-15 10:46:22.652794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.652841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.653085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.653132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.653359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-11-15 10:46:22.653411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.445 qpair failed and we were unable to recover it. 00:27:34.445 [2024-11-15 10:46:22.653645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.653694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.653925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.653971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.654106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.654151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.654359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.654392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.654601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.654625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.654828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.654878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.655095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.655142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 
00:27:34.446 [2024-11-15 10:46:22.655306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.655331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.655530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.655555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.655774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.655822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.655997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.656046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.656251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.656276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.656447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.656471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.656650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.656696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.656897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.656944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.657162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.657210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.657411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.657436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 
00:27:34.446 [2024-11-15 10:46:22.657662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.657709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.657883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.657932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.658146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.658193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.658357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.658387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.658564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.658588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.658747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.658796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.658951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.658999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.659222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.659271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.659489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.659514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.659703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.659750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 
00:27:34.446 [2024-11-15 10:46:22.659973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.660020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.660222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.660246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.660447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.660496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.660705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.660754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.660907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.660955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.661170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.661219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.661445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.661498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.661680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.661728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.661900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.661924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.662138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.662187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 
00:27:34.446 [2024-11-15 10:46:22.662402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.662427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.662554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.662600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.662759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.662804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.446 [2024-11-15 10:46:22.662985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-11-15 10:46:22.663034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.446 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.663212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.663236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.663360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.663401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.663624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.663677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.663860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.663909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.664129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.664179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.664373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.664398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 
00:27:34.447 [2024-11-15 10:46:22.664508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.664532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.664690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.664738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.664947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.664997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.665216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.665263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.665488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.665536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.665747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.665793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.666002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.666051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.666239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.666263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.666481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.666531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.666728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.666776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 
00:27:34.447 [2024-11-15 10:46:22.667000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.667050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.667262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.667285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.667524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.667572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.667773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.667821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.668035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.668083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.668264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.668288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.668501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.668551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.668691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.668738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.668967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.669014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.669187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.669211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 
00:27:34.447 [2024-11-15 10:46:22.669370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.669398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.669574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.669598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.669782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.669831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.670045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.670093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.670303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.670327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.670544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.670569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.670793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.670841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.671063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.671109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.671287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.671311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.671518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.671543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 
00:27:34.447 [2024-11-15 10:46:22.671734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.671787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.672019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.672066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.672289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.672315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.672536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.672561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.672791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.672840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.673056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.673106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.673313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.673338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.447 [2024-11-15 10:46:22.673490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.447 [2024-11-15 10:46:22.673516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.447 qpair failed and we were unable to recover it. 00:27:34.448 [2024-11-15 10:46:22.673739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.448 [2024-11-15 10:46:22.673794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.448 qpair failed and we were unable to recover it. 00:27:34.448 [2024-11-15 10:46:22.673989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.448 [2024-11-15 10:46:22.674038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.448 qpair failed and we were unable to recover it. 
00:27:34.448 - 00:27:34.452 [2024-11-15 10:46:22.674252 - 10:46:22.719757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 -- qpair failed and we were unable to recover it. (The identical error pair repeats for every connection attempt in this interval; only the per-attempt timestamps differ.)
00:27:34.452 [2024-11-15 10:46:22.719975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.720026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.720236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.720260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.720491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.720543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.720722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.720768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.720982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.721031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.721223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.721246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.721421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.721476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.721693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.721746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.721944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.721992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.722178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.722202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 
00:27:34.452 [2024-11-15 10:46:22.722377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.722402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.722622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.722684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.722851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.722898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.723138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.723186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.723379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.723404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.723624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.723648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.723874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.723919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.724119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.724169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.724380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.724404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.724576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.724600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 
00:27:34.452 [2024-11-15 10:46:22.724761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.724808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.725036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.725086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.725303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.725328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.725493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.725518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.725712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.725760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.725989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.726036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.452 qpair failed and we were unable to recover it. 00:27:34.452 [2024-11-15 10:46:22.726249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.452 [2024-11-15 10:46:22.726273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.726482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.726507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.726697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.726748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.726981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.727029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 
00:27:34.453 [2024-11-15 10:46:22.727232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.727256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.727474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.727523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.727751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.727800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.727992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.728041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.728255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.728283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.728514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.728564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.728753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.728799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.729015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.729063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.729216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.729240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.729429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.729482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 
00:27:34.453 [2024-11-15 10:46:22.729668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.729716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.729941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.729990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.730199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.730224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.730458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.730506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.730704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.730752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.730929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.730975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.731157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.731181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.731391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.731415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.731642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.731692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.731915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.731961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 
00:27:34.453 [2024-11-15 10:46:22.732121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.732169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.732345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.732378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.732593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.732618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.732839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.732889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.733074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.733122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.733338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.733370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.733558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.733582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.733806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.733854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.734069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.734122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.734339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.734371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 
00:27:34.453 [2024-11-15 10:46:22.734598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.734622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.734789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.734841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.735004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.735062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.735245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.735268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.735406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.735431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.735649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.735698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.735919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.735970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.736186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.736234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.736385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.736410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.736626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.736675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 
00:27:34.453 [2024-11-15 10:46:22.736849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.453 [2024-11-15 10:46:22.736896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.453 qpair failed and we were unable to recover it. 00:27:34.453 [2024-11-15 10:46:22.737046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.737094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.737296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.737319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.737553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.737599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.737786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.737833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.738018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.738069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.738245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.738270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.738498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.738548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.738768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.738817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.739012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.739060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 
00:27:34.454 [2024-11-15 10:46:22.739263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.739288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.739446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.739495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.739647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.739692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.739924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.739974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.740157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.740203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.740307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.740330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.740578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.740626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.740844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.740890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.741107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.741155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.741375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.741400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 
00:27:34.454 [2024-11-15 10:46:22.741546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.741570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.741763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.741810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.741992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.742041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.742206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.742230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.742428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.742480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.742675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.742721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.742913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.742963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.743151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.743198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.743402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.743427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.743589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.743649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 
00:27:34.454 [2024-11-15 10:46:22.743866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.743913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.744098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.744147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.744353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.744388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.744548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.744572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.744800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.744847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.745006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.745054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.745263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.745286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.745492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.745518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.745713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.745764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.745990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.746037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 
00:27:34.454 [2024-11-15 10:46:22.746212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.746236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.746428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.454 [2024-11-15 10:46:22.746483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.454 qpair failed and we were unable to recover it. 00:27:34.454 [2024-11-15 10:46:22.746667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.746717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.746959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.747007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.747220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.747244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.747441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.747493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.747757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.747806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.747961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.748011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.748171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.748196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.748400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.748424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 
00:27:34.455 [2024-11-15 10:46:22.748608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.748658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.748805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.748853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.749042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.749092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.749269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.749293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.749478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.749526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.749761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.749808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.749994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.750045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.750222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.750246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.750389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.750413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.750605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.750663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 
00:27:34.455 [2024-11-15 10:46:22.750893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.750943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.751157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.751206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.751409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.751464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.751671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.751720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.751938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.751997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.752173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.752197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.752400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.752425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.752615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.752668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.752836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.752887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.753033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.753080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 
00:27:34.455 [2024-11-15 10:46:22.753242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.753266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.753487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.753537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.753715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.753765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.753986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.754032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.754240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.754265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.754481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.754538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.754727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.754777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.754994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.755047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.755261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.755285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 00:27:34.455 [2024-11-15 10:46:22.755502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.455 [2024-11-15 10:46:22.755548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.455 qpair failed and we were unable to recover it. 
00:27:34.460 [2024-11-15 10:46:22.801794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.801841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.802064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.802113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.802302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.802326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.802509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.802534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.802668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.802724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.802940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.802987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.803092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.803144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.803279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.803302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.803508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.803559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.803692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.803741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 
00:27:34.460 [2024-11-15 10:46:22.803952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.803998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.804212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.804235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.804429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.804481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.804633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.804688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.804905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.804959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.805146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.805171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.805337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.805368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.805610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.805667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.805902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.805952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.806166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.806213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 
00:27:34.460 [2024-11-15 10:46:22.806407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.806432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.806674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.806722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.806871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.806920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.807102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.807152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.807372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.807398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.807606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.807630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.807797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.807844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.808026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.808073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.808276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.808301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.808510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.808536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 
00:27:34.460 [2024-11-15 10:46:22.808686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.808735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.808914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.808964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.809147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.809198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.809415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.809440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.809633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.809689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.809911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.809960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.810159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.810209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.810383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.460 [2024-11-15 10:46:22.810445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.460 qpair failed and we were unable to recover it. 00:27:34.460 [2024-11-15 10:46:22.810639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.810686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.810902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.810953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 
00:27:34.461 [2024-11-15 10:46:22.811163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.811213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.811416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.811476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.811644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.811691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.811911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.811960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.812109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.812159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.812336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.812359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.812557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.812605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.812815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.812865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.813040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.813086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.813302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.813327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 
00:27:34.461 [2024-11-15 10:46:22.813514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.813564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.813791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.813841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.814038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.814086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.814266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.814290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.814456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.814482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.814645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.814703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.814925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.814973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.815189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.815235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.815442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.815508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.815677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.815724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 
00:27:34.461 [2024-11-15 10:46:22.815935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.815982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.816191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.816215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.816413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.816465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.816616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.816674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.816876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.816926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.817139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.817164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.817379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.817404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.817560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.817611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.817835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.817883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.818108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.818158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 
00:27:34.461 [2024-11-15 10:46:22.818326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.818350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.818526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.818550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.818726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.818775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.818969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.819014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.819169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.819194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.819360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.819399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.819613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.819638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.819829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.819877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.820070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.820120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.820327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.820352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 
00:27:34.461 [2024-11-15 10:46:22.820583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.820630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.820811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.820858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.820997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.821044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.821249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.821273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.821439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.821464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.461 qpair failed and we were unable to recover it. 00:27:34.461 [2024-11-15 10:46:22.821642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.461 [2024-11-15 10:46:22.821691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.821836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.821883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.822105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.822152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.822328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.822353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.822595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.822643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 
00:27:34.462 [2024-11-15 10:46:22.822843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.822890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.823115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.823168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.823344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.823396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.823616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.823656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.823821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.823870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.824000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.824048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.824228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.824252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.824474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.824522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.824759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.824808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.824990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.825039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 
00:27:34.462 [2024-11-15 10:46:22.825214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.825238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.825398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.825423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.825604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.825654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.825884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.825934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.826124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.826173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.826352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.826383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.826591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.826640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.826804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.826849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.826997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.827046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.827248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.827272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 
00:27:34.462 [2024-11-15 10:46:22.827473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.827520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.827701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.827751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.827935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.827985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.828180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.828204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.828360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.828405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.828597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.828623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.828845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.828893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.829113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.829160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.829317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.829346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.829529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.829554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 
00:27:34.462 [2024-11-15 10:46:22.829794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.829843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.830035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.830083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.830287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.830311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.830508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.830533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.830739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.830785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.830962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.831008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.831180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.831228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.831431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.831458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.831645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.831702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.831884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.831932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 
00:27:34.462 [2024-11-15 10:46:22.832154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.832200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.832408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.832433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.832625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.832674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.832823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.832870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.833057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.833104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.833269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.462 [2024-11-15 10:46:22.833292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.462 qpair failed and we were unable to recover it. 00:27:34.462 [2024-11-15 10:46:22.833490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.833539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.833669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.833718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.833859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.833909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.834093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.834142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 
00:27:34.463 [2024-11-15 10:46:22.834303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.834327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.834543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.834592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.834733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.834779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.834995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.835044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.835254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.835277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.835460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.835508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.835685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.835730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.835906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.835954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.836133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.836180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.836390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.836415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 
00:27:34.463 [2024-11-15 10:46:22.836639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.836692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.836894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.836942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.837131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.837181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.837290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.837313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.837524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.837571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.837761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.837810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.838049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.838103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.838279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.838302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.838493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.838544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.838751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.838797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 
00:27:34.463 [2024-11-15 10:46:22.839030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.839079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.839292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.839316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.839456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.839481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.839639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.839689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.839812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.839858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.840041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.840085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.840299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.840323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.840564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.840614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.840792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.840839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.841004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.841051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 
00:27:34.463 [2024-11-15 10:46:22.841223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.841247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.841433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.841486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.841706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.841766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.841995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.842045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.842197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.842221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.842429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.842491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.842705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.842757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.842938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.842985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.843202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.843226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.843447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.843496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 
00:27:34.463 [2024-11-15 10:46:22.843643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.843699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.843892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.843940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.844112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.844136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.844320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.844343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.844504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.844554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.844735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.844786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.845014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.845065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.845276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.845300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.845402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.845428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.463 qpair failed and we were unable to recover it. 00:27:34.463 [2024-11-15 10:46:22.845601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.463 [2024-11-15 10:46:22.845653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 
00:27:34.464 [2024-11-15 10:46:22.845804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.845853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.846023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.846073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.846261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.846284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.846509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.846558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.846688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.846731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.846956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.847003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.847186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.847210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.847389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.847414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.847599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.847658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.847874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.847921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 
00:27:34.464 [2024-11-15 10:46:22.848140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.848197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.848379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.848405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.848611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.848668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.848900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.848947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.849172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.849222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.849445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.849471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.849669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.849719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.849908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.849957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.850157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.850203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.850430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.850455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 
00:27:34.464 [2024-11-15 10:46:22.850633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.850684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.850860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.850909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.851131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.851180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.851313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.851357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.851537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.851563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.851742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.851791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.852014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.852061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.852240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.852264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.852432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.852487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.852675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.852723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 
00:27:34.464 [2024-11-15 10:46:22.852938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.852987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.853174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.853223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.853416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.853441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.853659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.853708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.853896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.853942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.854165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.854215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.854426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.854469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.854684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.854731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.854922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.854970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.855173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.855198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 
00:27:34.464 [2024-11-15 10:46:22.855404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.855430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.855643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.855695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.855923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.855970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.856146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.856194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.856407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.856432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.856646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.856694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.856879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.856933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.857140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.857186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.857396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.857421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.857559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.857607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 
00:27:34.464 [2024-11-15 10:46:22.857788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.857838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.858027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.858077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.858209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.858233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.858441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.464 [2024-11-15 10:46:22.858495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.464 qpair failed and we were unable to recover it. 00:27:34.464 [2024-11-15 10:46:22.858719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.858769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.858970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.859020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.859172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.859197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.859333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.859357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.859546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.859596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.859770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.859794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 
00:27:34.465 [2024-11-15 10:46:22.859967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.860017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.860236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.860260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.860482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.860530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.860669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.860717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.860937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.860987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.861168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.861193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.861356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.861392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.861612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.861666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.861881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.861927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.862120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.862169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 
00:27:34.465 [2024-11-15 10:46:22.862398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.862423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.862557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.862603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.862789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.862843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.862995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.863042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.863217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.863241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.863461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.863520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.863713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.863762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.863973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.864021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.864204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.864229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.864411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.864437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 
00:27:34.465 [2024-11-15 10:46:22.864627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.864682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.864890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.864938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.865084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.865131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.865252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.865276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.865499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.865545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.865722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.865772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.865963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.866013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.866159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.866184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.866391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.866416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.866600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.866648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 
00:27:34.465 [2024-11-15 10:46:22.866877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.866924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.867117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.867162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.867375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.867399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.867583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.867609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.867848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.867898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.868085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.868135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.868326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.868350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.868512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.868537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.868723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.868772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.868965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.869013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 
00:27:34.465 [2024-11-15 10:46:22.869192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.869237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.869408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.869469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.869688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.869736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.869954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.870009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.465 qpair failed and we were unable to recover it. 00:27:34.465 [2024-11-15 10:46:22.870232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.465 [2024-11-15 10:46:22.870256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.870440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.870494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.870723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.870771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.870994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.871042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.871250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.871274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.871450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.871508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 
00:27:34.466 [2024-11-15 10:46:22.871721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.871769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.871961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.872010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.872182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.872206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.872416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.872441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.872639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.872692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.872870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.872916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.873134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.873183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.873336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.873360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.873562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.873590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.873719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.873769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 
00:27:34.466 [2024-11-15 10:46:22.873948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.873997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.874167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.874216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.874430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.874478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.874665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.874711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.874871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.874921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.875136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.875190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.875382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.875407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.875593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.875644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.875836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.875885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.876101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.876150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 
00:27:34.466 [2024-11-15 10:46:22.876328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.876376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.876588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.876612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.876831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.876880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.877078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.877127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.877289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.877315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.877489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.877550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.877741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.877790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.877987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.878035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.878242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.878266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.878467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.878523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 
00:27:34.466 [2024-11-15 10:46:22.878742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.878790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.879016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.879064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.879283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.879307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.879443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.879468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.466 [2024-11-15 10:46:22.879686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.466 [2024-11-15 10:46:22.879736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.466 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.879935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.879987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.880214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.880268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.880492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.880542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.880753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.880803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.881026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.881075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 
00:27:34.748 [2024-11-15 10:46:22.881262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.881287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.881494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.881545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.881752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.881802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.881987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.882036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.882244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.882269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.882450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.882501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.882680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.882728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.882956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.883006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.883219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.883243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.883460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.883509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 
00:27:34.748 [2024-11-15 10:46:22.883728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.883778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.883982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.884029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.884214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.884240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.884443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.884497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.884679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.884726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.884894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.884938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.885131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.885155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.885296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.885320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.885481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.885531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.885744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.885769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 
00:27:34.748 [2024-11-15 10:46:22.885897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.885921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.886127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.886152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.748 qpair failed and we were unable to recover it. 00:27:34.748 [2024-11-15 10:46:22.886377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-11-15 10:46:22.886402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.886552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.886576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.886793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.886840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.887029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.887077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.887259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.887284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.887508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.887533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.887709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.887759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.887951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.887997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 
00:27:34.749 [2024-11-15 10:46:22.888224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.888271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.888483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.888530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.888686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.888733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.888947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.888993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.889159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.889184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.889358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.889390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.889574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.889600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.889836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.889884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.890108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.890159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.890298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.890321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 
00:27:34.749 [2024-11-15 10:46:22.890555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.890582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.890746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.890794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.891012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.891060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.891279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.891303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.891510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.891536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.891774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.891821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.891990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.892039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.892172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.892196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.892351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.892383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.892557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.892582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 
00:27:34.749 [2024-11-15 10:46:22.892771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.892820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.893006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.893055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.893247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.893271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.893395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.893420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.893607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.893665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.893841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.893891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.894057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.894107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.894287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.894310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.894499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.894546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.749 qpair failed and we were unable to recover it. 00:27:34.749 [2024-11-15 10:46:22.894764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.749 [2024-11-15 10:46:22.894813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 
00:27:34.750 [2024-11-15 10:46:22.894987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.895033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.895242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.895266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.895396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.895422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.895621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.895679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.895896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.895945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.896130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.896178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.896357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.896387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.896526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.896576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.896758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.896809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.897037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.897084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 
00:27:34.750 [2024-11-15 10:46:22.897226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.897249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.897449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.897499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.897641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.897693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.897875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.897900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.898103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.898127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.898328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.898353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.898546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.898593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.898808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.898856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.899073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.899123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.899336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.899360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 
00:27:34.750 [2024-11-15 10:46:22.899520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.899545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.899769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.899818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.899973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.900021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.900211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.900260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.900443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.900497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.900731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.900780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.900921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.900969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.901150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.901198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.901379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.901405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.901549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.901573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 
00:27:34.750 [2024-11-15 10:46:22.901798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.901853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.901995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.902046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.902172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.902197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.902389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.902429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.902643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.902667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.902890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.902940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.903079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.903127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.750 [2024-11-15 10:46:22.903291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.750 [2024-11-15 10:46:22.903315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.750 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.903522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.903570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.903797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.903845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 
00:27:34.751 [2024-11-15 10:46:22.904076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.904124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.904339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.904370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.904501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.904525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.904746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.904796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.904983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.905032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.905175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.905222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.905414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.905439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.905590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.905639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.905829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.905892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.906113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.906161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 
00:27:34.751 [2024-11-15 10:46:22.906372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.906398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.906580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.906605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.906822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.906872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.907055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.907101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.907271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.907295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.907424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.907465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.907625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.907676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.907895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.907949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.908174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.908221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.908351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.908407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 
00:27:34.751 [2024-11-15 10:46:22.908633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.908692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.908893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.908943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.909113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.909164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.909377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.909402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.909569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.909594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.909724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.909780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.909966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.910016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.910201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.910225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.910389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.910415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.910589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.910637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 
00:27:34.751 [2024-11-15 10:46:22.910808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.910860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.911044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.911092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.911256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.911280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.911490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.911547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.911760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.911809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.912030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.751 [2024-11-15 10:46:22.912083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.751 qpair failed and we were unable to recover it. 00:27:34.751 [2024-11-15 10:46:22.912257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.912281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.912466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.912516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.912664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.912716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.912897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.912943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 
00:27:34.752 [2024-11-15 10:46:22.913167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.913217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.913321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.913377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.913546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.913596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.913825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.913874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.914090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.914135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.914328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.914353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.914603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.914653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.914790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.914839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.914971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.915021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.915176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.915199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 
00:27:34.752 [2024-11-15 10:46:22.915423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.915449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.915664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.915718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.915886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.915935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.916166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.916213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.916405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.916447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.916658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.916707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.916878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.916925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.917146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.917196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.917327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.917355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.917574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.917625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 
00:27:34.752 [2024-11-15 10:46:22.917853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.917904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.918118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.918165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.918385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.918410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.918587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.918611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.918745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.918798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.918982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.919031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.919184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.919233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.752 [2024-11-15 10:46:22.919332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.752 [2024-11-15 10:46:22.919356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.752 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.919551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.919602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.919821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.919870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 
00:27:34.753 [2024-11-15 10:46:22.920107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.920156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.920265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.920289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.920492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.920544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.920764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.920812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.921040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.921087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.921274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.921299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.921447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.921496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.921684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.921733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.921899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.921947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.922140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.922188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 
00:27:34.753 [2024-11-15 10:46:22.922359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.922405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.922553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.922604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.922762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.922812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.923041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.923089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.923302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.923326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.923533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.923585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.923810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.923859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.923975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.924024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.924242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.924265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.924434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.924458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 
00:27:34.753 [2024-11-15 10:46:22.924674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.924724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.924940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.924998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.925166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.925215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.925413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.925462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.925679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.925736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.925959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.926008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.926190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.926215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.926433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.926480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.926695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.926754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.926945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.926993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 
00:27:34.753 [2024-11-15 10:46:22.927178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.927203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.927387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.927413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.927601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.927649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.927828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.927876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.928046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.928092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.928275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.753 [2024-11-15 10:46:22.928299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.753 qpair failed and we were unable to recover it. 00:27:34.753 [2024-11-15 10:46:22.928471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.928497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.928669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.928716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.928931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.928979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.929216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.929272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 
00:27:34.754 [2024-11-15 10:46:22.929466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.929515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.929715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.929763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.929985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.930036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.930200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.930224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.930441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.930499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.930731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.930781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.931004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.931052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.931262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.931286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.931486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.931533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.931724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.931774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 
00:27:34.754 [2024-11-15 10:46:22.931931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.931980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.932191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.932215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.932399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.932447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.932668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.932719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.932935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.932985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.933164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.933189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.933382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.933407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.933626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.933676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.933895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.933945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.934093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.934145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 
00:27:34.754 [2024-11-15 10:46:22.934318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.934343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.934504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.934552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.934793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.934841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.935051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.935102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.935316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.935340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.935574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.935615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.935834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.935883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.936114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.936164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.936294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.936319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.936489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.936516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 
00:27:34.754 [2024-11-15 10:46:22.936683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.936732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.936896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.936950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.937174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.937220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.937438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.937463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.754 qpair failed and we were unable to recover it. 00:27:34.754 [2024-11-15 10:46:22.937688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.754 [2024-11-15 10:46:22.937713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.937852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.937877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.938039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.938064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.938227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.938251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.938449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.938495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.938640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.938665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 
00:27:34.755 [2024-11-15 10:46:22.938893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.938942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.939163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.939187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.939297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.939321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.939567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.939617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.939836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.939885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.940102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.940150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.940372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.940397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.940561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.940585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.940804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.940853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.941077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.941125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 
00:27:34.755 [2024-11-15 10:46:22.941296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.941321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.941498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.941524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.941717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.941767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.941987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.942034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.942246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.942270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.942452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.942478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.942667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.942716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.942936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.942984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.943198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.943245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.943432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.943483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 
00:27:34.755 [2024-11-15 10:46:22.943686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.943732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.943927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.943976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.944153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.944178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.944355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.944386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.944556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.944580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.944754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.944799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.944978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.945028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.945207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.945231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.945375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.945400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.945586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.945636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 
00:27:34.755 [2024-11-15 10:46:22.945854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.945906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.946122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.946172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.755 [2024-11-15 10:46:22.946379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.755 [2024-11-15 10:46:22.946404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.755 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.946547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.946596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.946828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.946877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.947077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.947125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.947239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.947263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.947477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.947523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.947746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.947801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.947996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.948045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 
00:27:34.756 [2024-11-15 10:46:22.948204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.948228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.948450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.948500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.948684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.948709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.948880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.948930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.949120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.949171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.949389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.949414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.949582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.949630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.949804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.949853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.953561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.953599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 00:27:34.756 [2024-11-15 10:46:22.953798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.756 [2024-11-15 10:46:22.953824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.756 qpair failed and we were unable to recover it. 
00:27:34.756 [2024-11-15 10:46:22.954053 - 10:46:22.989860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (this three-line error sequence repeats continuously throughout the interval)
00:27:34.760 [2024-11-15 10:46:22.990078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.760 [2024-11-15 10:46:22.990125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.760 qpair failed and we were unable to recover it. 00:27:34.760 [2024-11-15 10:46:22.990346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.760 [2024-11-15 10:46:22.990376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.760 qpair failed and we were unable to recover it. 00:27:34.760 [2024-11-15 10:46:22.990598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.760 [2024-11-15 10:46:22.990623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.760 qpair failed and we were unable to recover it. 00:27:34.760 [2024-11-15 10:46:22.990859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.760 [2024-11-15 10:46:22.990909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.760 qpair failed and we were unable to recover it. 00:27:34.760 [2024-11-15 10:46:22.991048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.760 [2024-11-15 10:46:22.991096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.760 qpair failed and we were unable to recover it. 00:27:34.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 491070 Killed "${NVMF_APP[@]}" "$@" 00:27:34.760 [2024-11-15 10:46:22.991307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.760 [2024-11-15 10:46:22.991331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.760 qpair failed and we were unable to recover it. 00:27:34.760 [2024-11-15 10:46:22.991497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.760 [2024-11-15 10:46:22.991523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.760 qpair failed and we were unable to recover it. 00:27:34.760 [2024-11-15 10:46:22.991669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.760 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:34.760 [2024-11-15 10:46:22.991725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.760 qpair failed and we were unable to recover it. 00:27:34.760 [2024-11-15 10:46:22.991856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.760 [2024-11-15 10:46:22.991902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.760 qpair failed and we were unable to recover it. 
00:27:34.760 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:34.760 [2024-11-15 10:46:22.992130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.760 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:34.760 [2024-11-15 10:46:22.992179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.760 qpair failed and we were unable to recover it. 00:27:34.760 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:34.760 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:34.760 [2024-11-15 10:46:22.992420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.760 [2024-11-15 10:46:22.992445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.760 qpair failed and we were unable to recover it. 00:27:34.760 [2024-11-15 10:46:22.992589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.760 [2024-11-15 10:46:22.992646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.760 qpair failed and we were unable to recover it. 00:27:34.760 [2024-11-15 10:46:22.992878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.760 [2024-11-15 10:46:22.992928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.760 qpair failed and we were unable to recover it. 00:27:34.760 [2024-11-15 10:46:22.993163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.761 [2024-11-15 10:46:22.993213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.761 qpair failed and we were unable to recover it. 00:27:34.761 [2024-11-15 10:46:22.993339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.761 [2024-11-15 10:46:22.993391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.761 qpair failed and we were unable to recover it. 00:27:34.761 [2024-11-15 10:46:22.993546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.761 [2024-11-15 10:46:22.993596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.761 qpair failed and we were unable to recover it. 00:27:34.761 [2024-11-15 10:46:22.993844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.761 [2024-11-15 10:46:22.993892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.761 qpair failed and we were unable to recover it. 
00:27:34.761 [2024-11-15 10:46:22.994111 - 10:46:22.997827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (this three-line error sequence repeats continuously throughout the interval)
00:27:34.761 [2024-11-15 10:46:22.997935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.761 [2024-11-15 10:46:22.997959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.761 qpair failed and we were unable to recover it. 00:27:34.761 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=491606 00:27:34.761 [2024-11-15 10:46:22.998101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.761 [2024-11-15 10:46:22.998126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.761 qpair failed and we were unable to recover it. 00:27:34.761 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:34.761 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 491606 00:27:34.761 [2024-11-15 10:46:22.998217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.761 [2024-11-15 10:46:22.998241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.761 qpair failed and we were unable to recover it. 00:27:34.761 [2024-11-15 10:46:22.998338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.761 [2024-11-15 10:46:22.998367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.761 qpair failed and we were unable to recover it. 00:27:34.761 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 491606 ']' 00:27:34.762 [2024-11-15 10:46:22.998498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:22.998522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.762 [2024-11-15 10:46:22.998687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:22.998713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:22.998790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:34.762 [2024-11-15 10:46:22.998815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 
00:27:34.762 [2024-11-15 10:46:22.998937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.762 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:34.762 [2024-11-15 10:46:22.998961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:34.762 qpair failed and we were unable to recover it.
00:27:34.762 [2024-11-15 10:46:22.999061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.762 [2024-11-15 10:46:22.999086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.762 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:27:34.762 qpair failed and we were unable to recover it.
00:27:34.762 [2024-11-15 10:46:22.999196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.762 [2024-11-15 10:46:22.999221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.762 qpair failed and we were unable to recover it.
00:27:34.762 10:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:34.762 [2024-11-15 10:46:22.999349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.762 [2024-11-15 10:46:22.999384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.762 qpair failed and we were unable to recover it.
00:27:34.762 [2024-11-15 10:46:22.999472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.762 [2024-11-15 10:46:22.999497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.762 qpair failed and we were unable to recover it.
00:27:34.762 [2024-11-15 10:46:22.999606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.762 [2024-11-15 10:46:22.999631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.762 qpair failed and we were unable to recover it.
00:27:34.762 [2024-11-15 10:46:22.999759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.762 [2024-11-15 10:46:22.999783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.762 qpair failed and we were unable to recover it.
00:27:34.762 [2024-11-15 10:46:22.999906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.762 [2024-11-15 10:46:22.999931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.762 qpair failed and we were unable to recover it.
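Interleaved with the retries above, the shell trace shows nvmf/common.sh recording nvmfpid=491606, launching /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 inside the cvl_0_0_ns_spdk network namespace, and then running waitforlisten 491606 against /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, under stated assumptions, looks like the following; max_retries=100 is taken from the trace, while the poll interval and the readiness check are illustrative guesses, not the harness's exact logic:

  # Sketch only: start the target in the test namespace, remember its PID,
  # and poll for the RPC UNIX socket before issuing any RPCs.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!                       # PID of the backgrounded command
  for _ in $(seq 1 100); do        # max_retries=100, as in the trace
      [ -S /var/tmp/spdk.sock ] && break   # socket appears once the app is up
      sleep 0.5                    # interval is an assumption
  done

How waitforlisten actually detects readiness (a socket test versus a real RPC probe) is not visible in this log, so the -S test above is only a stand-in; once the target binds its listener, the connect() retries above stop producing ECONNREFUSED.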
00:27:34.762 [2024-11-15 10:46:23.000048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.000072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.000213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.000242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.000378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.000405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.000538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.000563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.000716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.000743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.000901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.000926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.001027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.001052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.001180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.001204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.004379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.004421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.004580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.004608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 
00:27:34.762 [2024-11-15 10:46:23.004793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.004819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.004997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.005039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.005199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.005227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.005385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.005428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.005569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.005598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.005748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.005776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.005920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.005947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.006090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.006117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.762 qpair failed and we were unable to recover it. 00:27:34.762 [2024-11-15 10:46:23.006259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.762 [2024-11-15 10:46:23.006285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.006401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.006429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 
00:27:34.763 [2024-11-15 10:46:23.006533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.006560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.006709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.006736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.006878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.006920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.007056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.007083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.007235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.007261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.007404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.007432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.007528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.007555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.007721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.007763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.007913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.007942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.008094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.008120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 
00:27:34.763 [2024-11-15 10:46:23.008232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.008258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.008387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.008428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.008532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.008558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.008692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.008734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.008874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.008900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.009088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.009115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.009256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.009298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.009757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.009799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.009945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.009972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.011393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.011453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 
00:27:34.763 [2024-11-15 10:46:23.011617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.011663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.011797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.011825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.011999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.012042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.012220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.012248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.012400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.012429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.012540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.012568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.763 [2024-11-15 10:46:23.012696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.763 [2024-11-15 10:46:23.012722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.763 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.012899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.012926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.013061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.013088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.013199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.013226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 
00:27:34.764 [2024-11-15 10:46:23.013372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.013402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.013496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.013523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.014879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.014909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.015088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.015115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.015260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.015286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.015457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.015491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.015667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.015709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.015845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.015886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.016007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.016051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.016184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.016209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 
00:27:34.764 [2024-11-15 10:46:23.016327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.016374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.016501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.016528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.016649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.016690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.016821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.016845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.016955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.016994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.017127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.017153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.017246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.017272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.017377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.017403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.017502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.017528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.017687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.017726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 
00:27:34.764 [2024-11-15 10:46:23.017833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.017857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.018010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.018034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.018142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.018165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.018300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.018323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.018487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.018513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.018637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.018675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.018807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.018831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.018942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.018966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.019084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.019108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.019259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.019282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 
00:27:34.764 [2024-11-15 10:46:23.019404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.019446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.019571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.019596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.019715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.019755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.019916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.019941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.764 [2024-11-15 10:46:23.020072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.764 [2024-11-15 10:46:23.020095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.764 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.020235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.020258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.020389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.020414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.020500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.020524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.020663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.020703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.020845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.020885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 
00:27:34.765 [2024-11-15 10:46:23.021014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.021039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.021174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.021199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.021368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.021394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.021538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.021563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.021698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.021723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.021849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.021874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.022035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.022064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.022189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.022214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.022327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.022373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.022515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.022541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 
00:27:34.765 [2024-11-15 10:46:23.022658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.022683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.022813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.022839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.022952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.022977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.023097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.023122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.023266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.023291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.023470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.023496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.023590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.023615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.023713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.023738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.023881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.023906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.024042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.024066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 
00:27:34.765 [2024-11-15 10:46:23.024206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.024232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.024338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.024387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.024523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.765 [2024-11-15 10:46:23.024547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.765 qpair failed and we were unable to recover it. 00:27:34.765 [2024-11-15 10:46:23.024655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.024680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.024823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.024847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.024931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.024955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.025060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.025086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.025247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.025272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.025437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.025464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.025553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.025578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 
00:27:34.766 [2024-11-15 10:46:23.025725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.025749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.025857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.025881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.026056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.026081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.026182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.026211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.026373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.026397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.026560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.026584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.026753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.026776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.026939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.026964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.027132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.027156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.027253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.027277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 
00:27:34.766 [2024-11-15 10:46:23.027399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.027425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.027562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.027588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.027723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.027747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.027880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.027904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.028036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.028061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.028210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.028235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.028345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.028376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.028546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.028571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.028699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.028738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.028862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.028886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 
00:27:34.766 [2024-11-15 10:46:23.029031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.029055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.029215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.029252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.029358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.029388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.029519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.029544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.029700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.029723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.029885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.029910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.030047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.030086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.030211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.030236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.030406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.030431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.030576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.030600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 
00:27:34.766 [2024-11-15 10:46:23.030710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.766 [2024-11-15 10:46:23.030739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.766 qpair failed and we were unable to recover it. 00:27:34.766 [2024-11-15 10:46:23.030899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.030923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.031058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.031083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.031251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.031288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.031399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.031423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.031529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.031554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.031701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.031725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.031890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.031915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.032057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.032097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.032242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.032280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 
00:27:34.767 [2024-11-15 10:46:23.032387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.032412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.032525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.032551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.032679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.032703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.032873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.032898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.033000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.033024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.033186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.033210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.033331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.033357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.033512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.033537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.033701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.033725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.033864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.033888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 
00:27:34.767 [2024-11-15 10:46:23.034047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.034086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.034206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.034244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.034400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.034425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.034538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.034563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.767 qpair failed and we were unable to recover it. 00:27:34.767 [2024-11-15 10:46:23.034728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.767 [2024-11-15 10:46:23.034753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.034907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.034931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.035071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.035096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.035224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.035248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.035427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.035452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.035599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.035625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 
00:27:34.768 [2024-11-15 10:46:23.035725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.035748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.035886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.035910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.036049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.036074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.036194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.036218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.036370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.036395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.036531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.036556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.036690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.036715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.036842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.036866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.037002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.037027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.037156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.037195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 
00:27:34.768 [2024-11-15 10:46:23.037318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.037340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.037505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.037534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.037698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.037722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.037830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.037854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.037980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.038005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.038153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.038179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.038307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.038332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.038452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.038478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.038596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.038621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.038757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.038780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 
00:27:34.768 [2024-11-15 10:46:23.038908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.038934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.039092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.039117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.039231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.039254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.039390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.039414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.039540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.039564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.039733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.039758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.039875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.039900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.040029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.040052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.040188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.768 [2024-11-15 10:46:23.040213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.768 qpair failed and we were unable to recover it. 00:27:34.768 [2024-11-15 10:46:23.040351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.040395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 
00:27:34.769 [2024-11-15 10:46:23.040497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.040522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.040621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.040646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.040769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.040794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.040934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.040958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.041142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.041165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.041313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.041337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.041480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.041516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.041658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.041686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.041821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.041854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.041981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.042009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 
00:27:34.769 [2024-11-15 10:46:23.042147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.042191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.042330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.042357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.042492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.042520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.042652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.042680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.042863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.042891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.043015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.043055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.043211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.043235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.043388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.043412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.043560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.043598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.043715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.043738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 
00:27:34.769 [2024-11-15 10:46:23.043857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.043881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.043988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.044013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.044158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.044182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.044347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.044405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.044552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.044577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.044723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.044747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.044915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.044938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.045033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.045057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.045234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.045258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.045403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.045428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 
00:27:34.769 [2024-11-15 10:46:23.045577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.045616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.045734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.045757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.045887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.045911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.046045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.046068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.046205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.046229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.046355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.046389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.769 qpair failed and we were unable to recover it. 00:27:34.769 [2024-11-15 10:46:23.046519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.769 [2024-11-15 10:46:23.046544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.046694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.046718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.046857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.046881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.047005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.047029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 
00:27:34.770 [2024-11-15 10:46:23.047195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.047219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.047379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.047404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.047532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.047556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.047674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.047697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.047822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.047846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.048006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.048044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.048166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.048190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.048348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.048380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.048493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.048518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.048656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.048679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 
00:27:34.770 [2024-11-15 10:46:23.048812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.048850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.048936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.048959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.049112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.049136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.049306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.049330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.049520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.049545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.049674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.049697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.049866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.049889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.050045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.050067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.050168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.050192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.050348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.050381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 
00:27:34.770 [2024-11-15 10:46:23.050500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.050525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.050659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.050682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.050846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.050874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.050998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.051035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.051160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.051183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.051344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.051377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.051536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.051561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.051682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.051706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.051850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.051889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.052037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.052075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 
00:27:34.770 [2024-11-15 10:46:23.052234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.052258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.052346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.052378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.052499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.052524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.052649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.052673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.770 [2024-11-15 10:46:23.052772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.770 [2024-11-15 10:46:23.052797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.770 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.052953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.052977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.053113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.053137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.053236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.053261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.053337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.053374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.053489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.053513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 
00:27:34.771 [2024-11-15 10:46:23.053656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.053694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.053826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.053849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.054006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.054029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.054199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.054222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.054342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.054388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.054509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.054533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.054665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.054689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.054818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.054842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.054966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.054989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 00:27:34.771 [2024-11-15 10:46:23.055122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.771 [2024-11-15 10:46:23.055147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.771 qpair failed and we were unable to recover it. 
00:27:34.771 [2024-11-15 10:46:23.055230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.771 [2024-11-15 10:46:23.055268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.771 qpair failed and we were unable to recover it.
(the three messages above recur for every connection attempt from 10:46:23.055230 through 10:46:23.068036, against tqpair=0x1206fa0 and, for some attempts, tqpair=0x7f5b54000b90; each connect() to 10.0.0.2, port=4420 fails with errno = 111)
00:27:34.773 [2024-11-15 10:46:23.068098] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization...
00:27:34.773 [2024-11-15 10:46:23.068181] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
(the same connection-failure messages continue from 10:46:23.068193 through 10:46:23.089296)
00:27:34.776 [2024-11-15 10:46:23.089272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.776 [2024-11-15 10:46:23.089296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.777 qpair failed and we were unable to recover it.
00:27:34.777 [2024-11-15 10:46:23.089423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.089448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.089568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.089591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.089724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.089748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.089883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.089905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.090060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.090084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.090266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.090290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.090438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.090462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.090621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.090660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.090778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.090815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.090941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.090965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 
00:27:34.777 [2024-11-15 10:46:23.091123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.091146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.091279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.091317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.091417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.091456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.091576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.091600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.091725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.091767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.091905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.091928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.092059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.092083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.092197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.092221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.092394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.092418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.092527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.092552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 
00:27:34.777 [2024-11-15 10:46:23.092685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.092708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.092879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.092902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.093059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.093084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.093190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.093213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.093346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.093378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.093514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.093538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.093672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.093695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.093858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.093882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.094018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.094042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.094206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.094244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 
00:27:34.777 [2024-11-15 10:46:23.094405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.094431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.094549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.777 [2024-11-15 10:46:23.094573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.777 qpair failed and we were unable to recover it. 00:27:34.777 [2024-11-15 10:46:23.094698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.094736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.094877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.094904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.095075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.095098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.095220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.095258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.095417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.095442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.095573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.095597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.095731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.095755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.095890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.095914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 
00:27:34.778 [2024-11-15 10:46:23.096073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.096096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.096232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.096270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.096438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.096462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.096606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.096629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.096800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.096824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.096954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.096977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.097134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.097172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.097267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.097291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.097436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.097476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.097615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.097639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 
00:27:34.778 [2024-11-15 10:46:23.097763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.097802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.097933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.097956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.098088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.098113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.098228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.098251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.098369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.098393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.098542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.098566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.098651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.098674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.098830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.098854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.099014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.099037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.099169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.099193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 
00:27:34.778 [2024-11-15 10:46:23.099342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.099378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.099525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.099550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.099668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.099707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.099884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.099907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.100067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.100090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.100191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.100215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.100406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.100432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.100592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.100615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.100738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.100776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 00:27:34.778 [2024-11-15 10:46:23.100893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.100918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.778 qpair failed and we were unable to recover it. 
00:27:34.778 [2024-11-15 10:46:23.100999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.778 [2024-11-15 10:46:23.101022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.101151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.101175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.101295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.101318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.101464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.101489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.101617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.101642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.101780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.101819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.101945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.101968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.102125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.102149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.102284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.102307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.102480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.102505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 
00:27:34.779 [2024-11-15 10:46:23.102600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.102625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.102813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.102835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.102991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.103014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.103138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.103163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.103289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.103312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.103459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.103484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.103606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.103630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.103756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.103794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.103955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.103978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.104112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.104136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 
00:27:34.779 [2024-11-15 10:46:23.104270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.104293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.104442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.104467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.104584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.104608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.104732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.104756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.104893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.104916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.105046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.105070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.105176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.105199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.105314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.105337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.105481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.105506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.105630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.105654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 
00:27:34.779 [2024-11-15 10:46:23.105807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.105830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.105966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.106005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.106095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.106119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.106278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.106302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.106438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.106463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.106618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.106657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.106770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.106794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.106954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.106977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.107093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.779 [2024-11-15 10:46:23.107117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.779 qpair failed and we were unable to recover it. 00:27:34.779 [2024-11-15 10:46:23.107257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.107279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 
00:27:34.780 [2024-11-15 10:46:23.107406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.107432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.107579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.107603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.107725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.107749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.107874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.107898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.108030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.108054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.108200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.108224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.108383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.108407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.108500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.108525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.108655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.108678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.108810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.108848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 
00:27:34.780 [2024-11-15 10:46:23.108940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.108963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.109085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.109108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.109241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.109266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.109433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.109457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.109566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.109590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.109672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.109696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.109820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.109843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.109986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.110009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.110166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.110194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 00:27:34.780 [2024-11-15 10:46:23.110305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.780 [2024-11-15 10:46:23.110329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.780 qpair failed and we were unable to recover it. 
00:27:34.780 [2024-11-15 10:46:23.110475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.780 [2024-11-15 10:46:23.110500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.780 qpair failed and we were unable to recover it.
00:27:34.780 [2024-11-15 10:46:23.110616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.780 [2024-11-15 10:46:23.110640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.780 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back from 10:46:23.110 through 10:46:23.142 ...]
00:27:34.785 [2024-11-15 10:46:23.142616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.785 [2024-11-15 10:46:23.142669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420
00:27:34.785 qpair failed and we were unable to recover it.
[... the same failure repeats a few more times against tqpair=0x7f5b54000b90 through 10:46:23.143, then resumes against tqpair=0x1206fa0 ...]
00:27:34.786 [2024-11-15 10:46:23.146316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.786 [2024-11-15 10:46:23.146341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.786 qpair failed and we were unable to recover it.
00:27:34.786 [2024-11-15 10:46:23.146506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.146531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.146657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.146682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.146859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.146883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.147033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.147057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.147237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.147261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.147403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.147454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.147547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.147574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.147726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.147754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.147905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.147948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.148096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.148126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 
00:27:34.786 [2024-11-15 10:46:23.148247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.148276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.148450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.148492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.148589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.148613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.148790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.148830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.148923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.148961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.149094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.149117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.149218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.149243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.149394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.149419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.149549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.149575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.149699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.149724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 
00:27:34.786 [2024-11-15 10:46:23.149862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.149887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.150058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.150082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.150221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.150245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.150376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.150407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.150501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.150526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.150663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.150688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.150844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.150882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.786 qpair failed and we were unable to recover it. 00:27:34.786 [2024-11-15 10:46:23.151045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.786 [2024-11-15 10:46:23.151071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.151191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.151216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.151415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.151442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 
00:27:34.787 [2024-11-15 10:46:23.151545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.151569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.151723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.151748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.151943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.151977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.152167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.152198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.152331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.152360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.152487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.152515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.152691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.152719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.152863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.152891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b54000b90 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.153037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.153077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.153174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.153198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 
00:27:34.787 [2024-11-15 10:46:23.153332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.153356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.153497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.153523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.153642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.153665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.153831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.153870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.154020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.154044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.154180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.154204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.154347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.154377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.154507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.154533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.154666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.154690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.154840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.154880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 
00:27:34.787 [2024-11-15 10:46:23.155043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.155066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.155250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.155275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.155388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.155425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.155577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.155601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.155732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.155756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.155892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.155918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.156064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.156088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.156227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.156250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.156402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.156428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.156552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.156580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 
00:27:34.787 [2024-11-15 10:46:23.156711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.156736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.156895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.156920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.157032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.157055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.157195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.157219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.157411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.157437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.157567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.787 [2024-11-15 10:46:23.157592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.787 qpair failed and we were unable to recover it. 00:27:34.787 [2024-11-15 10:46:23.157746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.157770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 00:27:34.788 [2024-11-15 10:46:23.157925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.157949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 00:27:34.788 [2024-11-15 10:46:23.158074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.158098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 00:27:34.788 [2024-11-15 10:46:23.158281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.158305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 
00:27:34.788 [2024-11-15 10:46:23.158452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.158478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 00:27:34.788 [2024-11-15 10:46:23.158613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.158645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 00:27:34.788 [2024-11-15 10:46:23.158751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.158774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 00:27:34.788 [2024-11-15 10:46:23.158923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.158949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 00:27:34.788 [2024-11-15 10:46:23.159082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.159107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 00:27:34.788 [2024-11-15 10:46:23.159290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.159313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 00:27:34.788 [2024-11-15 10:46:23.159461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.159487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 00:27:34.788 [2024-11-15 10:46:23.159670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.159694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 00:27:34.788 [2024-11-15 10:46:23.159832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.159856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 00:27:34.788 [2024-11-15 10:46:23.160059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.788 [2024-11-15 10:46:23.160083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420 00:27:34.788 qpair failed and we were unable to recover it. 
00:27:34.788 [2024-11-15 10:46:23.160225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.788 [2024-11-15 10:46:23.160249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.788 qpair failed and we were unable to recover it.
00:27:34.788 [2024-11-15 10:46:23.160288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... the connect() failed / sock connection error / qpair failed sequence continues for tqpair=0x1206fa0 (addr=10.0.0.2, port=4420) through 10:46:23.161685 ...]
00:27:34.788 [2024-11-15 10:46:23.161808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.788 [2024-11-15 10:46:23.161832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1206fa0 with addr=10.0.0.2, port=4420
00:27:34.788 qpair failed and we were unable to recover it.
[... the same sequence keeps repeating, first for tqpair=0x1206fa0 and then for tqpair=0x7f5b4c000b90 (addr=10.0.0.2, port=4420), from 10:46:23.161808 through 10:46:23.175187 ...]
00:27:34.790 [2024-11-15 10:46:23.175360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.790 [2024-11-15 10:46:23.175421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.790 qpair failed and we were unable to recover it. 00:27:34.790 [2024-11-15 10:46:23.175622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.790 [2024-11-15 10:46:23.175663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.790 qpair failed and we were unable to recover it. 00:27:34.790 [2024-11-15 10:46:23.175820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.790 [2024-11-15 10:46:23.175845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.790 qpair failed and we were unable to recover it. 00:27:34.790 [2024-11-15 10:46:23.176015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.790 [2024-11-15 10:46:23.176040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.790 qpair failed and we were unable to recover it. 00:27:34.790 [2024-11-15 10:46:23.176145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.790 [2024-11-15 10:46:23.176170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.790 qpair failed and we were unable to recover it. 00:27:34.790 [2024-11-15 10:46:23.176302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.790 [2024-11-15 10:46:23.176333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.790 qpair failed and we were unable to recover it. 00:27:34.790 [2024-11-15 10:46:23.176476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.790 [2024-11-15 10:46:23.176503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.790 qpair failed and we were unable to recover it. 00:27:34.790 [2024-11-15 10:46:23.176706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.790 [2024-11-15 10:46:23.176731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.176889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.176913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.177025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.177057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 
00:27:34.791 [2024-11-15 10:46:23.177200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.177225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.177401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.177428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.177535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.177560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.177679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.177704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.177877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.177917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.178023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.178047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.178196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.178222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.178399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.178425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.178553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.178586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.178743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.178767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 
00:27:34.791 [2024-11-15 10:46:23.178897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.178927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.179092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.179117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.179255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.179281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.179410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.179441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.179567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.179593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.179761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.179785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.179911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.179951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.180095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.180120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.180289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.180329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.180451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.180491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 
00:27:34.791 [2024-11-15 10:46:23.180595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.180620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.180791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.180830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.180993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.181017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.181269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.181294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.181450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.181476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.181614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.181658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.181783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.181808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.182001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.182026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.182148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.182187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.182335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.182382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 
00:27:34.791 [2024-11-15 10:46:23.182515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.182541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.182720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.182745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.182920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.182945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.183122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.183162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.183303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.183328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.183470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.183495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.183708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.183733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.183902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.791 [2024-11-15 10:46:23.183927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.791 qpair failed and we were unable to recover it. 00:27:34.791 [2024-11-15 10:46:23.184069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.184094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.184237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.184277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 
00:27:34.792 [2024-11-15 10:46:23.184464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.184491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.184590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.184615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.184787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.184826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.184991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.185016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.185167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.185191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.185386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.185435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.185532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.185573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.185727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.185751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.185892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.185926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.186060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.186086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 
00:27:34.792 [2024-11-15 10:46:23.186299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.186323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.186461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.186487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.186619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.186659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.186808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.186838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.186993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.187019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.187190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.187214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.187370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.187397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.187525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.187551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.187681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.187707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.187822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.187848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 
00:27:34.792 [2024-11-15 10:46:23.187969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.187994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.188126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.188152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.188341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.188374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.188494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.188520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.188630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.188669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.188836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.188862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.189019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.189044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.189277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.189302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.189436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.189462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.189616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.189642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 
00:27:34.792 [2024-11-15 10:46:23.189785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.189818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.189930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.189969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.190133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.190159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.190260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.190286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.190397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.190433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.190536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.190562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.190702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.190728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.190881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.190906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.191062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.792 [2024-11-15 10:46:23.191087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.792 qpair failed and we were unable to recover it. 00:27:34.792 [2024-11-15 10:46:23.191263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.793 [2024-11-15 10:46:23.191289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.793 qpair failed and we were unable to recover it. 
00:27:34.793 [2024-11-15 10:46:23.191419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.793 [2024-11-15 10:46:23.191459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.793 qpair failed and we were unable to recover it. 00:27:34.793 [2024-11-15 10:46:23.191658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.793 [2024-11-15 10:46:23.191697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.793 qpair failed and we were unable to recover it. 00:27:34.793 [2024-11-15 10:46:23.191879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.793 [2024-11-15 10:46:23.191905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.793 qpair failed and we were unable to recover it. 00:27:34.793 [2024-11-15 10:46:23.192020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.793 [2024-11-15 10:46:23.192045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.793 qpair failed and we were unable to recover it. 00:27:34.793 [2024-11-15 10:46:23.192201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.793 [2024-11-15 10:46:23.192226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.793 qpair failed and we were unable to recover it. 00:27:34.793 [2024-11-15 10:46:23.192347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.793 [2024-11-15 10:46:23.192381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:34.793 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.192482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.192508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.192635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.192660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.192780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.192806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.193005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.193030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 
00:27:35.060 [2024-11-15 10:46:23.193150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.193174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.193320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.193346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.193468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.193494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.193629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.193659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.193795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.193821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.193948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.193974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.194124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.194149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.194375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.194403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.194540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.194565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.194691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.194717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 
00:27:35.060 [2024-11-15 10:46:23.194873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.194899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.195054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.195079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.195201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.195227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.195370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.195397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.195495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.195520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.195655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.195681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.195809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.195835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.195968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.195993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.196194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.196219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 00:27:35.060 [2024-11-15 10:46:23.196379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.060 [2024-11-15 10:46:23.196406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.060 qpair failed and we were unable to recover it. 
00:27:35.060 [2024-11-15 10:46:23.196550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.196575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.196672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.196697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.196859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.196885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.197039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.197064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.197186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.197212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.197375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.197402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.197485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.197510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.197659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.197684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.197858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.197884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.198082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.198108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 
00:27:35.061 [2024-11-15 10:46:23.198278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.198303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.198462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.198488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.198602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.198638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.198826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.198851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.198994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.199019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.199165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.199189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.199334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.199359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.199481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.199506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.199649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.199674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.199854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.199879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 
00:27:35.061 [2024-11-15 10:46:23.200039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.200063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.200180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.200204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.200343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.200376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.200488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.200518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.200674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.200699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.200848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.200873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.201016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.201055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.201234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.201259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.201414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.201441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.201534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.201560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 
00:27:35.061 [2024-11-15 10:46:23.201670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.201695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.201876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.201901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.202071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.202095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.202236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.202261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.202401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.202428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.202538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.202564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.202724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.202749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.202929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.202955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.203082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.203122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.203283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.203307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 
00:27:35.061 [2024-11-15 10:46:23.203443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.203469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.203600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.203626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.203771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.203796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.203943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.203984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.204159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.204184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.204356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.204387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.204509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.204534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.204680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.204705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.204850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.204875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.205025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.205050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 
00:27:35.061 [2024-11-15 10:46:23.205204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.205243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.205399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.205424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.205550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.205576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.205728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.205753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.205945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.205969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.206198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.206223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.206430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.206455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.206564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.206589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.206732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.206774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.206949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.206973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 
00:27:35.061 [2024-11-15 10:46:23.207138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.207164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.207346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.207392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.207519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.207545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.207669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.207699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.207815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.207840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.208022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.208061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.208175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.208211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.061 qpair failed and we were unable to recover it. 00:27:35.061 [2024-11-15 10:46:23.208374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.061 [2024-11-15 10:46:23.208400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.208504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.208529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.208701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.208726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 
00:27:35.062 [2024-11-15 10:46:23.208873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.208897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.209022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.209047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.209265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.209290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.209433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.209459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.209592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.209621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.209762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.209787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.209968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.209994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.210166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.210190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.210298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.210322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.210463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.210489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 
00:27:35.062 [2024-11-15 10:46:23.210595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.210621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.210788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.210813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.211031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.211056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.211219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.211244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.211416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.211442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.211537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.211562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.211712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.211737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.211906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.211931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.212077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.212102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.212246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.212287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 
00:27:35.062 [2024-11-15 10:46:23.212433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.212460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.212566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.212591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.212787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.212811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.212925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.212950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.213116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.213141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.213325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.213349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.213506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.213547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.213658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.213698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.213863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.213899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.214068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.214092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 
00:27:35.062 [2024-11-15 10:46:23.214307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.214331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.214506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.214532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.214701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.214726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.214877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.214907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.215109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.215133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.215276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.215301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.215429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.215456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.215579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.215604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.215814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.215838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.215999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.216023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 
00:27:35.062 [2024-11-15 10:46:23.216204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.216229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.216378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.216405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.216534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.216560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.216739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.216764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.216955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.216979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.217135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.217159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.217284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.217309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.217472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.217499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.217609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.217634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.217750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.217776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 
00:27:35.062 [2024-11-15 10:46:23.217945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.217985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.218128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.218152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.218321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.218346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.218453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.218479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.218585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.218610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.218754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.218779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.218922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.218963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.219099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.219139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.219274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.219299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.219435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.219461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 
00:27:35.062 [2024-11-15 10:46:23.219564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.219590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.219708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.219733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.219875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.219901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.220017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.220042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.220221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.220247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.220348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.220379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.220495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.220521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.062 [2024-11-15 10:46:23.220633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.062 [2024-11-15 10:46:23.220674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.062 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.220801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.220826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.220992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.221018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 
00:27:35.063 [2024-11-15 10:46:23.221140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.221165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.221311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.221335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.221462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.221488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.221626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.221655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.221850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.221874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.222051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.222084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.222247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.222271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.222399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.222425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.222532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.222558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.222693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.222718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 
00:27:35.063 [2024-11-15 10:46:23.222826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.222852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.223005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.223030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.223215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.223240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.223416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.223442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.223536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.223562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.223704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.223730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.223875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.223900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.224056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.224081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.224270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.224295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.224434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.224460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 
00:27:35.063 [2024-11-15 10:46:23.224560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.224586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.224785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.224810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.224956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.224980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.225165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.225190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.225444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.225470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.225702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.225727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.225907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.225932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.226105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.226129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.226265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.226304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.226456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.226483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 
00:27:35.063 [2024-11-15 10:46:23.226622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.226648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.226843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.226867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.226985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.227011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.227177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.227202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.227348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.227388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.227553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.227579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.227768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.227793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.227954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.227978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.228197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.228222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.228371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.228413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 
00:27:35.063 [2024-11-15 10:46:23.228546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.228587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.228758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.228783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.228963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.228988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.229147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.229175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.229325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.229350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.229485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.229511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.229658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.229683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.229815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.229841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.229981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.230007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 00:27:35.063 [2024-11-15 10:46:23.230174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.063 [2024-11-15 10:46:23.230200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.063 qpair failed and we were unable to recover it. 
[the retry sequence continues from 10:46:23.230329 through 10:46:23.233138; interleaved with those retries, the target's app.c reports its trace setup:]
00:27:35.063 [2024-11-15 10:46:23.231267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:35.063 [2024-11-15 10:46:23.231303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:35.063 [2024-11-15 10:46:23.231322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:35.063 [2024-11-15 10:46:23.231334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:35.063 [2024-11-15 10:46:23.231345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
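[the two capture options described by the notices above, written out as shell commands; the flags come from the notice text itself, while the binary path is an assumption based on a default SPDK build layout:]

    # live snapshot of the nvmf tracepoints from the running target (shared-memory instance id 0)
    ./build/bin/spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis once the target exits
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0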
[the retries continue from 10:46:23.233282 through 10:46:23.234709; around the same time the SPDK reactors report coming up on their assigned cores:]
00:27:35.064 [2024-11-15 10:46:23.233223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:35.064 [2024-11-15 10:46:23.233246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:35.064 [2024-11-15 10:46:23.233290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:35.064 [2024-11-15 10:46:23.233293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
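[reactors on cores 4-7 correspond to a CPU mask of 0xf0, which is the kind of mask normally handed to the target on its command line. Illustrative only; the binary path and the use of -m here are assumptions, not taken from this run:]

    # 0xf0 sets bits 4-7, matching the four reactor cores reported above
    ./build/bin/nvmf_tgt -m 0xf0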
[from here the connect() failed (errno = 111) / qpair-failure sequence repeats uninterrupted, with timestamps advancing from 10:46:23.234907 through 10:46:23.261564, the last entry in this portion of the log]
00:27:35.067 [2024-11-15 10:46:23.261696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.261721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.261852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.261877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.261971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.261996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.262165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.262191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.262349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.262381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.262491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.262516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.262613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.262639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.262795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.262821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.262960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.262986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.263113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.263138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 
00:27:35.067 [2024-11-15 10:46:23.263242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.263267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.263367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.263394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.263524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.263549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.263646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.263671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.263794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.263820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.263976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.264002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.264121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.264147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.264264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.264290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.264428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.264454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.264556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.264582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 
00:27:35.067 [2024-11-15 10:46:23.264686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.264712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.264919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.264945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.265072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.265097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.265223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.265249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.265336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.265367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.265481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.265507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.265603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.265629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.265758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.265784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.265944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.265969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.266076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.266101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 
00:27:35.067 [2024-11-15 10:46:23.266202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.266227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.266343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.266376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.266483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.266508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.266608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.266634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.266738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.266768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.266909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.266934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.267064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.267090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.267248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.267274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.267436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.267462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.267558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.267584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 
00:27:35.067 [2024-11-15 10:46:23.267716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.267742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.067 qpair failed and we were unable to recover it. 00:27:35.067 [2024-11-15 10:46:23.267908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.067 [2024-11-15 10:46:23.267934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.268062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.268088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.268248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.268273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.268422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.268448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.268547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.268572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.268700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.268725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.268852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.268878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.268991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.269017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.269171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.269197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 
00:27:35.068 [2024-11-15 10:46:23.269319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.269344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.269453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.269479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.269591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.269617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.269773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.269798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.269900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.269925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.270053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.270079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.270203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.270229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.270354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.270385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.270488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.270514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.270599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.270624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 
00:27:35.068 [2024-11-15 10:46:23.270767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.270792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.270892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.270918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.271027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.271052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.271185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.271211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.271340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.271387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.271521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.271547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.271726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.271752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.271839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.271864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.271974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.271999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.272194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.272220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 
00:27:35.068 [2024-11-15 10:46:23.272376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.272403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.272594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.272631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.272761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.272787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.272932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.272958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.273131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.273161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.273289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.273315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.273426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.273453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.273552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.273577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.273733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.273758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.273879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.273905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 
00:27:35.068 [2024-11-15 10:46:23.274027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.274053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.274188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.274214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.274314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.274339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.274446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.274471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.274579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.274604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.274730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.274756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.274897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.274923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.275060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.275086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.275226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.275261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.275409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.275437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 
00:27:35.068 [2024-11-15 10:46:23.275571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.275597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.275733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.275759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.275891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.275917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.276073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.276099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.276232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.276258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.276402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.276428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.276558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.276584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.276708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.276734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.276869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.068 [2024-11-15 10:46:23.276894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.068 qpair failed and we were unable to recover it. 00:27:35.068 [2024-11-15 10:46:23.277017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.277043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 
00:27:35.069 [2024-11-15 10:46:23.277199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.277225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.277330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.277356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.277490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.277516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.277651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.277677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.277833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.277859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.278012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.278038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.278198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.278224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.278335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.278366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.278459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.278485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.278585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.278610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 
00:27:35.069 [2024-11-15 10:46:23.278719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.278744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.278874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.278900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.279033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.279059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.279163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.279188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.279334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.279371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.279506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.279532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.279637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.279662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.279762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.279788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.279916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.279941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.280071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.280096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 
00:27:35.069 [2024-11-15 10:46:23.280293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.280318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.280439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.280465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.280569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.280595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.280749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.280775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.280970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.280995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.281154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.281180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.281317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.281344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.281468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.281494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.281616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.281642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 00:27:35.069 [2024-11-15 10:46:23.281767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.069 [2024-11-15 10:46:23.281793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.069 qpair failed and we were unable to recover it. 
00:27:35.069 [2024-11-15 10:46:23.281950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.069 [2024-11-15 10:46:23.281975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420
00:27:35.069 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously for tqpair=0x7f5b4c000b90 (addr=10.0.0.2, port=4420) between 10:46:23.282 and 10:46:23.312; duplicate entries omitted ...]
00:27:35.072 [2024-11-15 10:46:23.312339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.072 [2024-11-15 10:46:23.312369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420
00:27:35.072 qpair failed and we were unable to recover it.
00:27:35.072 [2024-11-15 10:46:23.312516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.312541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 00:27:35.072 [2024-11-15 10:46:23.312661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.312694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 00:27:35.072 [2024-11-15 10:46:23.312842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.312867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 00:27:35.072 [2024-11-15 10:46:23.312995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.313020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 00:27:35.072 [2024-11-15 10:46:23.313173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.313198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 00:27:35.072 [2024-11-15 10:46:23.313297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.313322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 00:27:35.072 [2024-11-15 10:46:23.313456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.313482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 00:27:35.072 [2024-11-15 10:46:23.313607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.313633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 00:27:35.072 [2024-11-15 10:46:23.313731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.313756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 00:27:35.072 [2024-11-15 10:46:23.313848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.313873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 
00:27:35.072 [2024-11-15 10:46:23.313966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.313991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 00:27:35.072 [2024-11-15 10:46:23.314104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.314129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 00:27:35.072 [2024-11-15 10:46:23.314286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.314312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 00:27:35.072 [2024-11-15 10:46:23.314405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.072 [2024-11-15 10:46:23.314431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.072 qpair failed and we were unable to recover it. 00:27:35.072 [2024-11-15 10:46:23.314553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.314578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.314706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.314731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.314846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.314872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.314990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.315016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.315142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.315168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.315267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.315293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 
00:27:35.073 [2024-11-15 10:46:23.315417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.315444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.315568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.315593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.315682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.315708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.315831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.315856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.316009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.316035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.316159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.316184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.316317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.316342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.316499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.316524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.316652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.316678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.316805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.316830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 
00:27:35.073 [2024-11-15 10:46:23.316956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.316981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.317100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.317126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.317273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.317298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.317398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.317424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.317574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.317600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.317716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.317742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.317831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.317856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.317990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.318016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.318148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.318174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.318294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.318319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 
00:27:35.073 [2024-11-15 10:46:23.318458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.318484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.318608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.318639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.318732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.318758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.318880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.318906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.318991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.319016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.319172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.319197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.319323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.319349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.319496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.319521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.319666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.319692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.319851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.319876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 
00:27:35.073 [2024-11-15 10:46:23.319971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.319996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.320135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.320160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.320292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.320330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.320423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.320449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.320597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.320633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.320765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.320791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.320970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.320995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.321123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.321149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.321281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.321307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.321416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.321442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 
00:27:35.073 [2024-11-15 10:46:23.321566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.321592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.321730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.321756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.321888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.321913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.322067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.322093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.322224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.322256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.322360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.322391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.322508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.322534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.322685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.322710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.322876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.322901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.323049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.323082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 
00:27:35.073 [2024-11-15 10:46:23.323238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.323263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.323359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.323402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.323548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.323574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.323749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.323774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.323937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.323963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.324120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.324145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.324244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.324270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.324417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.324443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.324562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.324587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.324755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.324781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 
00:27:35.073 [2024-11-15 10:46:23.324920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.073 [2024-11-15 10:46:23.324945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.073 qpair failed and we were unable to recover it. 00:27:35.073 [2024-11-15 10:46:23.325101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.325131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.325228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.325254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.325416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.325443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.325527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.325553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.325729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.325754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.325880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.325906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.325995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.326020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.326152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.326178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.326314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.326340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 
00:27:35.074 [2024-11-15 10:46:23.326514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.326539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.326663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.326693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.326774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.326800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.326893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.326918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.327078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.327104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.327229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.327255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.327408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.327434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.327583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.327608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.327733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.327758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.327890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.327915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 
00:27:35.074 [2024-11-15 10:46:23.328078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.328103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.328232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.328257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.328426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.328452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.328558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.328584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.328668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.328694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.328814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.328840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.328943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.328968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.329101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.329126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.329266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.329292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.329458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.329485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 
00:27:35.074 [2024-11-15 10:46:23.329573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.329598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.329717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.329742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.329871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.329897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.329981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.330006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.330138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.330164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.330299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.330325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.330423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.330449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.330596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.330627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.330729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.330755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.330892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.330917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 
00:27:35.074 [2024-11-15 10:46:23.331016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.331042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.331175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.331205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.331391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.331446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.331530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.331556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.331735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.331760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.331880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.331905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.332041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.332067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.332212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.332238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.332401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.332427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.332553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.332578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 
00:27:35.074 [2024-11-15 10:46:23.332711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.332736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.332900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.332926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.333046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.333071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.333204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.333229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.333349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.333381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.333525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.333550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.333719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.333745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.333830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.333855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.333984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.334009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.334175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.334201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 
00:27:35.074 [2024-11-15 10:46:23.334295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.334321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.334449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.074 [2024-11-15 10:46:23.334475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.074 qpair failed and we were unable to recover it. 00:27:35.074 [2024-11-15 10:46:23.334610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.334635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.334784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.334809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.334970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.334996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.335130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.335156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.335251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.335276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.335398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.335424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.335512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.335538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.335660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.335686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 
00:27:35.075 [2024-11-15 10:46:23.335812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.335838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.335941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.335966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.336091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.336117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.336238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.336264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.336394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.336420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.336602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.336627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.336717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.336742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.336896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.336921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.337049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.337074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.337231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.337257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 
00:27:35.075 [2024-11-15 10:46:23.337401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.337428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.337584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.337614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.337750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.337775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.337911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.337936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.338061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.338086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.338242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.338268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.338392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.338418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.338521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.338547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.338648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.338674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.338798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.338823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 
00:27:35.075 [2024-11-15 10:46:23.338945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.338970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.339145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.339171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.339266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.339292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.339445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.339471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.339601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.339631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.339757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.339783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.339923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.339948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.340050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.340075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.340217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.340242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.340376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.340402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 
00:27:35.075 [2024-11-15 10:46:23.340543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.340568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.340673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.340698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.340798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.340824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.340916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.340941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.341064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.341099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.341216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.341241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.341412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.341438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.341567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.341593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.341734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.341766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.341885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.341910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 
00:27:35.075 [2024-11-15 10:46:23.341995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.342028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.342187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.342212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.342331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.342356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.342487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.342513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.342646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.342680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.342767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.342793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.342907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.342932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.343063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.343088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.343231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.343256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.343359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.343404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 
00:27:35.075 [2024-11-15 10:46:23.343555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.343580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.343705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.343737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.343858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.343884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.344009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.344034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.344151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.344176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.344312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.344337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.344500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.344526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.344650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.344676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.344808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.344833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.344972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.344998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 
00:27:35.075 [2024-11-15 10:46:23.345136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.345161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.345290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.345315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.345452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.345478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.075 [2024-11-15 10:46:23.345563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.075 [2024-11-15 10:46:23.345589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.075 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.345720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.345745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.345880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.345906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.346030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.346055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.346209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.346242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.346396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.346422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.346598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.346623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 
00:27:35.076 [2024-11-15 10:46:23.346780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.346806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.346961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.346987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.347145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.347170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.347271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.347296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.347434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.347460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.347558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.347583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.347678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.347703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.347827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.347852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.347964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.347990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.348137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.348162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 
00:27:35.076 [2024-11-15 10:46:23.348293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.348319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.348457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.348484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.348580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.348606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.348773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.348799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.348957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.348982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.349124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.349149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.349276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.349302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.349473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.349498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.349616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.349642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.349783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.349808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 
00:27:35.076 [2024-11-15 10:46:23.349940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.349965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.350061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.350091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.350251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.350276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.350392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.350419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.350566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.350599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.350689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.350715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.350827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.350853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.350971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.350997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:35.076 [2024-11-15 10:46:23.351100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.351127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 
00:27:35.076 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:27:35.076 [2024-11-15 10:46:23.351264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.351290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.351414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.351441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.351548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.351575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:35.076 [2024-11-15 10:46:23.351672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.351707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.076 [2024-11-15 10:46:23.351868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.351894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.352052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.352088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.352185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.352211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.352314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.352340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 
00:27:35.076 [2024-11-15 10:46:23.352470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.352497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.352600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.352625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.352713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.352738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.352842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.352867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.352976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.353002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.353123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.353148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.353240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.353266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.353405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.353432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.353533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.353559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.353663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.353690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 
00:27:35.076 [2024-11-15 10:46:23.353782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.353809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.353956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.353982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.354108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.354134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.354240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.354266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.354384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.354411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.354511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.354537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.354687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.354713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.354834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.354859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.354958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.354984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.355106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.355132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 
00:27:35.076 [2024-11-15 10:46:23.355233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.355259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.076 [2024-11-15 10:46:23.355359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.076 [2024-11-15 10:46:23.355390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.076 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.355493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.355522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.355616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.355642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.355746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.355771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.355871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.355897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.356047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.356073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.356179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.356205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.356308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.356333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.356456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.356483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 
00:27:35.077 [2024-11-15 10:46:23.356604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.356630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.356775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.356800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.356899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.356925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.357026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.357052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.357171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.357197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.357331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.357358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.357471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.357497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.357594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.357620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.357719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.357744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.357828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.357853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 
00:27:35.077 [2024-11-15 10:46:23.357995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.358021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.358106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.358132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.358250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.358276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.358391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.358419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.358525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.358551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.358645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.358671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.358775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.358801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.358891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.358917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.359035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.359062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.359167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.359192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 
00:27:35.077 [2024-11-15 10:46:23.359295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.359320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.359422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.359449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.359554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.359580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.359702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.359728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.359847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.359873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.360000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.360025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.360149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.360175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.360298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.360324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.360419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.360445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.360547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.360574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 
00:27:35.077 [2024-11-15 10:46:23.360694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.360720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.360812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.360838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.360980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.361010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.361139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.361165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.361324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.361350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.361947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.361978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.362113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.362140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.362227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.362262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.362369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.362396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.362504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.362530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 
00:27:35.077 [2024-11-15 10:46:23.362659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.362685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.362808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.362834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.362943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.362969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.363097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.363124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.363214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.363241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.363370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.363397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.363513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.363539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.363685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.363712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.363862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.363888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.363980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.364006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 
00:27:35.077 [2024-11-15 10:46:23.364146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.364173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.364261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.364287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.364382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.364409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.364506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.364532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.364644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.364669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.364754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.364779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.364928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.364954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.365059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.365085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.365209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.077 [2024-11-15 10:46:23.365236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.077 qpair failed and we were unable to recover it. 00:27:35.077 [2024-11-15 10:46:23.365373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.365401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 
00:27:35.078 [2024-11-15 10:46:23.365500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.365526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.365630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.365656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.365746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.365772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.365908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.365934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.366085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.366111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.366214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.366240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.366360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.366409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.366499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.366525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.366647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.366673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.366797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.366824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 
00:27:35.078 [2024-11-15 10:46:23.366910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.366936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.367065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.367095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.367235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.367265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.367399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.367425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.367517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.367544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.367648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.367675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.367769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.367795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.367896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.367922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.368057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.368090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.368219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.368245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 
00:27:35.078 [2024-11-15 10:46:23.368342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.368377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.368480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.368506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.368605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.368631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.368733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.368759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.368860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.368886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.368987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.369023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.369149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.369175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.369265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.369291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.369404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.369430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.369531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.369558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 
00:27:35.078 [2024-11-15 10:46:23.369703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.369740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.369834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.369860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.370000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.370026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.370188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.370214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.370313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.370338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.370446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.370473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.370561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.370587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.370725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.370752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.370874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.370900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.371009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.371034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 
00:27:35.078 [2024-11-15 10:46:23.371166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.371192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.371297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.371324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.371417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.371443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.371535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.371560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.371712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.371739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.078 [2024-11-15 10:46:23.371841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.371866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:35.078 [2024-11-15 10:46:23.371965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.371991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.078 [2024-11-15 10:46:23.372090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.372120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 
00:27:35.078 [2024-11-15 10:46:23.372214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.372240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.078 [2024-11-15 10:46:23.372390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.372417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.372520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.372550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.372665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.372691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.372816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.372842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.373028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.373054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.373159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.373185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.373276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.373302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.373414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.373440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 
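Interleaved with the connect retries above, the test itself keeps moving: nvmf/common.sh@512 arms a cleanup trap, host/target_disconnect.sh@19 creates the Malloc0 bdev over RPC, and autotest_common.sh switches command tracing back off via xtrace_disable / set +x. The sketch below is only a rough, hand-run equivalent of those traced commands against a local SPDK target; the scripts/rpc.py path and the assumption that rpc_cmd simply forwards its arguments to rpc.py come from SPDK test-framework conventions rather than from this log, and process_shm / nvmftestfini are helper functions defined by the test scripts, not standalone binaries.

  # Sketch only -- not part of this run.
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT  # clean up the target and its shared memory on exit
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                            # 64 MB malloc bdev with 512-byte blocks, named Malloc0
  set +x                                                                           # stop tracing each command (xtrace_disable)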
00:27:35.078 [2024-11-15 10:46:23.373530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.373556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.373671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.373705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.373804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.373831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.373955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.373981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.374107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.374141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.078 qpair failed and we were unable to recover it. 00:27:35.078 [2024-11-15 10:46:23.374292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.078 [2024-11-15 10:46:23.374318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.374407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.374434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.374531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.374557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.374711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.374747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.374864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.374890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 
00:27:35.079 [2024-11-15 10:46:23.374995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.375029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.375189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.375215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.375300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.375326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.375433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.375459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.375557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.375583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.375717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.375743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.375829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.375855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.375945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.375971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.376061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.376087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.376215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.376241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 
00:27:35.079 [2024-11-15 10:46:23.376369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.376396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.376495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.376522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.376614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.376641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.376765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.376791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.376918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.376944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.377088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.377114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.377240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.377265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.377396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.377423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.377528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.377554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.377673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.377699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 
00:27:35.079 [2024-11-15 10:46:23.377814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.377840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.377936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.377962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.378091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.378117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.378238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.378267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.378367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.378394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.378485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.378510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.378662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.378688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.378837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.378862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.378975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.379001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.379118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.379143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 
00:27:35.079 [2024-11-15 10:46:23.379269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.379294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.379415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.379442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.379539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.379565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.379687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.379713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.379810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.379836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.379954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.379979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.380105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.380131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.380258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.380284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.380378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.380405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.380543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.380569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 
00:27:35.079 [2024-11-15 10:46:23.380669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.380695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.380819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.380844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.380946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.380972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.381119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.381145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.381267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.381292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.381394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.381420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.381545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.381571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.381652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.381678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.381793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.381818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 00:27:35.079 [2024-11-15 10:46:23.381943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.079 [2024-11-15 10:46:23.381968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.079 qpair failed and we were unable to recover it. 
00:27:35.079 [2024-11-15 10:46:23.382085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:35.079 [2024-11-15 10:46:23.382112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 
00:27:35.079 qpair failed and we were unable to recover it. 
00:27:35.079 [... the same three-message sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously through 2024-11-15 10:46:23.409053 ...]
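errno 111 corresponds to ECONNREFUSED: the host-side connect() to 10.0.0.2:4420 is being actively refused, meaning nothing is accepting TCP connections on that address and port at this point in the run, which is consistent with the target being stopped or not yet listening during the target_disconnect scenario. A minimal sketch of how that condition can be checked from the initiator host; the address and port come from the log above, while the probe itself is illustrative and not part of the test scripts:

# Illustrative only: probe whether anything accepts TCP connections on the
# NVMe/TCP port that the host keeps failing to reach (ECONNREFUSED = errno 111).
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "something is listening on 10.0.0.2:4420"
else
    echo "10.0.0.2:4420 refused or unreachable (matches errno 111 in the log)"
fi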
00:27:35.082 [... repeated connect() failures (errno = 111) omitted ...]
00:27:35.083 Malloc0
00:27:35.083 [... repeated connect() failures (errno = 111) omitted ...]
00:27:35.083 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.083 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:35.083 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.083 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:35.083 [... repeated connect() failures (errno = 111) omitted ...]
00:27:35.083 [... repeated connect() failures (errno = 111) omitted ...]
00:27:35.083 [2024-11-15 10:46:23.413535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:35.083 [... repeated connect() failures (errno = 111) omitted ...]
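The rpc_cmd nvmf_create_transport -t tcp -o step traced above is what triggers the *** TCP Transport Init *** notice from tcp.c. Outside the autotest wrappers, the same transport creation is normally issued with SPDK's standalone RPC client against a running nvmf_tgt; a minimal sketch under that assumption (the binary path is illustrative, and the -o flag is simply passed through unchanged, as in the trace):

    # start an SPDK NVMe-oF target, then create the TCP transport over its RPC socket
    ./build/bin/nvmf_tgt &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o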
00:27:35.083 [... the connect() failure against addr=10.0.0.2, port=4420 keeps repeating; duplicate records omitted ...]
00:27:35.084 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.084 [... repeated connect() failures (errno = 111) omitted ...]
00:27:35.084 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:35.084 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.084 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:35.084 [... repeated connect() failures (errno = 111) omitted ...]
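The subsystem created above (nqn.2016-06.io.spdk:cnode1, serial SPDK00000000000001) is the one the initiator is trying to reach. A sketch of the equivalent standalone RPC calls follows; the listener step is an assumption based on the usual NVMe-oF target flow and is not visible in this excerpt:

    # create the subsystem with the same NQN and serial as in the trace, allowing any host (-a)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # assumed follow-up: expose it on the address the initiator keeps dialing
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420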
00:27:35.084 [... the same connect() failure (errno = 111) to tqpair=0x7f5b4c000b90, addr=10.0.0.2, port=4420 repeats continuously; duplicate records omitted ...]
00:27:35.085 [... repeated connect() failures (errno = 111) omitted ...]
00:27:35.085 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.085 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:35.085 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.085 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:35.085 [... repeated connect() failures (errno = 111) omitted ...]
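Attaching Malloc0 as a namespace completes the target-side configuration the host has been waiting on. For orientation, the standalone equivalent plus a comparable host-side probe could look like this (the nvme-cli line is purely illustrative; the test drives the connection through SPDK's own initiator, as the nvme_tcp errors show):

    # attach the Malloc0 bdev as a namespace of the subsystem (mirrors the rpc_cmd line above)
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # hypothetical host-side connection attempt against the same endpoint
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1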
00:27:35.085 [... the connect() failure (errno = 111) to tqpair=0x7f5b4c000b90, addr=10.0.0.2, port=4420 continues to repeat from 10:46:23.430628 through 10:46:23.436360; duplicate records omitted ...]
00:27:35.085 [2024-11-15 10:46:23.436520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.085 [2024-11-15 10:46:23.436548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.085 qpair failed and we were unable to recover it. 00:27:35.085 [2024-11-15 10:46:23.436696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.085 [2024-11-15 10:46:23.436721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.085 qpair failed and we were unable to recover it. 00:27:35.085 [2024-11-15 10:46:23.436829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.085 [2024-11-15 10:46:23.436855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.085 qpair failed and we were unable to recover it. 00:27:35.085 [2024-11-15 10:46:23.436997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.085 [2024-11-15 10:46:23.437022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.085 qpair failed and we were unable to recover it. 00:27:35.085 [2024-11-15 10:46:23.437114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.085 [2024-11-15 10:46:23.437139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.085 qpair failed and we were unable to recover it. 00:27:35.085 [2024-11-15 10:46:23.437264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.085 [2024-11-15 10:46:23.437290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.085 qpair failed and we were unable to recover it. 00:27:35.085 [2024-11-15 10:46:23.437428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.085 [2024-11-15 10:46:23.437454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.085 qpair failed and we were unable to recover it. 00:27:35.085 [2024-11-15 10:46:23.437568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.085 [2024-11-15 10:46:23.437594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.085 qpair failed and we were unable to recover it. 00:27:35.085 [2024-11-15 10:46:23.437681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.085 [2024-11-15 10:46:23.437707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.085 qpair failed and we were unable to recover it. 
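Note on the block of retries above: every connect() failure carries errno = 111, which on Linux is ECONNREFUSED. The host side of the disconnect test keeps retrying the TCP connection to 10.0.0.2:4420 while nothing is listening on that port yet; the listener is only added by the rpc_cmd nvmf_subsystem_add_listener call that appears just below, and the "Target Listening" notice follows shortly after. A minimal standalone sketch of the same failure mode (not taken from the SPDK sources, the address and port simply mirror the log) is:

/* Sketch: connect() to a reachable host with no listener on the port fails with
 * errno 111 (ECONNREFUSED) on Linux, the exact error reported by the retries above.
 * Build with: cc -o connect_probe connect_probe.c */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port used by the test */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the host reachable but no listener on 4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}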
00:27:35.085 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.085 [2024-11-15 10:46:23.437852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.085 [2024-11-15 10:46:23.437878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420
00:27:35.085 qpair failed and we were unable to recover it.
00:27:35.085 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:35.086 [2024-11-15 10:46:23.438023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.086 [2024-11-15 10:46:23.438048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420
00:27:35.086 qpair failed and we were unable to recover it.
00:27:35.086 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.086 [2024-11-15 10:46:23.438171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.086 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:35.086 [2024-11-15 10:46:23.438197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420
00:27:35.086 qpair failed and we were unable to recover it.
00:27:35.086 [2024-11-15 10:46:23.438345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.086 [2024-11-15 10:46:23.438377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420
00:27:35.086 qpair failed and we were unable to recover it.
00:27:35.086 [2024-11-15 10:46:23.438496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.086 [2024-11-15 10:46:23.438522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420
00:27:35.086 qpair failed and we were unable to recover it.
00:27:35.086 [2024-11-15 10:46:23.438600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.086 [2024-11-15 10:46:23.438626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420
00:27:35.086 qpair failed and we were unable to recover it.
00:27:35.086 [2024-11-15 10:46:23.438770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.086 [2024-11-15 10:46:23.438795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420
00:27:35.086 qpair failed and we were unable to recover it.
00:27:35.086 [2024-11-15 10:46:23.438908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.086 [2024-11-15 10:46:23.438934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420
00:27:35.086 qpair failed and we were unable to recover it.
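The rpc_cmd nvmf_subsystem_add_listener line above is the test suite's shell wrapper around SPDK's JSON-RPC interface (scripts/rpc.py talking to the application's RPC socket); the host-side connect() retries keep failing until that listener actually comes up, which the *NOTICE* a few lines below records. The sketch that follows is illustrative only: the socket path and parameter shape are taken from SPDK's JSON-RPC documentation, not from this log, and should be treated as assumptions here.

/* Illustrative sketch (not part of the test): send one nvmf_subsystem_add_listener
 * JSON-RPC request to the SPDK application's default RPC socket.
 * Assumed socket path: /var/tmp/spdk.sock; assumed params shape per SPDK's jsonrpc docs. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"nvmf_subsystem_add_listener\","
        "\"params\":{\"nqn\":\"nqn.2016-06.io.spdk:cnode1\","
        "\"listen_address\":{\"trtype\":\"TCP\",\"adrfam\":\"IPv4\","
        "\"traddr\":\"10.0.0.2\",\"trsvcid\":\"4420\"}}}";

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_un sa = {0};
    sa.sun_family = AF_UNIX;
    strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);  /* default RPC socket (assumption) */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    if (write(fd, req, strlen(req)) < 0) {
        perror("write");
    } else {
        char resp[4096];
        ssize_t n = read(fd, resp, sizeof(resp) - 1);  /* on success the reply carries "result": true */
        if (n > 0) {
            resp[n] = '\0';
            printf("%s\n", resp);
        }
    }

    close(fd);
    return 0;
}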
00:27:35.086 [2024-11-15 10:46:23.439081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.439106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.439214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.439239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.439320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.439345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.439453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.439478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.439601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.439627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.439703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.439728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.439879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.439905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.440050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.440076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.440224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.440249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.440377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.440403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 
00:27:35.086 [2024-11-15 10:46:23.440513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.440539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.440690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.440715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.440834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.440860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.440943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.440968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.441117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.441142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.441265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.441290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.441382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.441408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.441521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.441547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.441658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.086 [2024-11-15 10:46:23.441683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b4c000b90 with addr=10.0.0.2, port=4420 00:27:35.086 qpair failed and we were unable to recover it. 
00:27:35.086 [2024-11-15 10:46:23.441802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** [2024-11-15 10:46:23.444404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 [2024-11-15 10:46:23.444523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 [2024-11-15 10:46:23.444552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 [2024-11-15 10:46:23.444568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command [2024-11-15 10:46:23.444580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 [2024-11-15 10:46:23.444617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 qpair failed and we were unable to recover it.
00:27:35.086 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.086 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:35.086 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.086 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:35.086 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.086 10:46:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 491200 [2024-11-15 10:46:23.454188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 [2024-11-15 10:46:23.454278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 [2024-11-15 10:46:23.454306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 [2024-11-15 10:46:23.454320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command [2024-11-15 10:46:23.454332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 [2024-11-15 10:46:23.454380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 qpair failed and we were unable to recover it.
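From this point the failure mode changes: the listener is up and the TCP connection succeeds, but the NVMe-oF Fabrics CONNECT for the I/O queue pair is rejected. The target logs "Unknown controller ID 0x1" and the host reports "Connect command completed with error: sct 1, sc 130": sct 1 is the command-specific status code type, and sc 130 is 0x82, which for a Fabrics CONNECT command corresponds to the Connect Invalid Parameters status per the NVMe-oF status table (stated from the specification, not from this log). The "CQ transport error -6 (No such device or address)" that follows is -ENXIO. A small decode helper, written as a generic illustration rather than SPDK code, shows how those two numbers fall out of the 16-bit completion status word:

/* Generic illustration (not SPDK source) of decoding the NVMe completion status word
 * into SCT/SC, matching the "sct 1, sc 130" seen above. Assumed bit layout per the
 * NVMe base spec: bit 0 phase tag, bits 8:1 SC, bits 11:9 SCT, bit 15 DNR. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status_fields {
    uint8_t sc;   /* status code */
    uint8_t sct;  /* status code type: 0 generic, 1 command specific, 2 media/data, 7 vendor */
    uint8_t dnr;  /* do not retry */
};

static struct nvme_status_fields decode_status(uint16_t status_raw)
{
    struct nvme_status_fields s;
    s.sc  = (uint8_t)((status_raw >> 1) & 0xff);
    s.sct = (uint8_t)((status_raw >> 9) & 0x7);
    s.dnr = (uint8_t)((status_raw >> 15) & 0x1);
    return s;
}

int main(void)
{
    /* sct 1 / sc 0x82 (130): for a Fabrics CONNECT this is Connect Invalid Parameters,
     * consistent with the target-side "Unknown controller ID 0x1" error logged above. */
    uint16_t raw = (uint16_t)((1u << 9) | (0x82u << 1));
    struct nvme_status_fields s = decode_status(raw);
    printf("sct %u, sc %u (0x%02x)\n", (unsigned)s.sct, (unsigned)s.sc, (unsigned)s.sc);
    return 0;
}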
00:27:35.086 [2024-11-15 10:46:23.464251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.086 [2024-11-15 10:46:23.464357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.086 [2024-11-15 10:46:23.464397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.086 [2024-11-15 10:46:23.464412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.086 [2024-11-15 10:46:23.464429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.086 [2024-11-15 10:46:23.464460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.474263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.086 [2024-11-15 10:46:23.474390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.086 [2024-11-15 10:46:23.474416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.086 [2024-11-15 10:46:23.474431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.086 [2024-11-15 10:46:23.474443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.086 [2024-11-15 10:46:23.474473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.484181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.086 [2024-11-15 10:46:23.484286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.086 [2024-11-15 10:46:23.484312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.086 [2024-11-15 10:46:23.484326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.086 [2024-11-15 10:46:23.484338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.086 [2024-11-15 10:46:23.484378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.086 qpair failed and we were unable to recover it. 
00:27:35.086 [2024-11-15 10:46:23.494228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.086 [2024-11-15 10:46:23.494332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.086 [2024-11-15 10:46:23.494359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.086 [2024-11-15 10:46:23.494382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.086 [2024-11-15 10:46:23.494395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.086 [2024-11-15 10:46:23.494425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.504285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.086 [2024-11-15 10:46:23.504400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.086 [2024-11-15 10:46:23.504425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.086 [2024-11-15 10:46:23.504439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.086 [2024-11-15 10:46:23.504450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.086 [2024-11-15 10:46:23.504480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.086 qpair failed and we were unable to recover it. 00:27:35.086 [2024-11-15 10:46:23.514274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.086 [2024-11-15 10:46:23.514372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.086 [2024-11-15 10:46:23.514397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.086 [2024-11-15 10:46:23.514411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.086 [2024-11-15 10:46:23.514422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.086 [2024-11-15 10:46:23.514452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.086 qpair failed and we were unable to recover it. 
00:27:35.344 [2024-11-15 10:46:23.524432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.344 [2024-11-15 10:46:23.524547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.344 [2024-11-15 10:46:23.524574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.344 [2024-11-15 10:46:23.524588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.344 [2024-11-15 10:46:23.524600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.344 [2024-11-15 10:46:23.524638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.344 qpair failed and we were unable to recover it. 00:27:35.344 [2024-11-15 10:46:23.534420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.344 [2024-11-15 10:46:23.534512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.344 [2024-11-15 10:46:23.534536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.345 [2024-11-15 10:46:23.534550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.345 [2024-11-15 10:46:23.534563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.345 [2024-11-15 10:46:23.534592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.345 qpair failed and we were unable to recover it. 00:27:35.345 [2024-11-15 10:46:23.544425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.345 [2024-11-15 10:46:23.544514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.345 [2024-11-15 10:46:23.544538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.345 [2024-11-15 10:46:23.544552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.345 [2024-11-15 10:46:23.544563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.345 [2024-11-15 10:46:23.544593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.345 qpair failed and we were unable to recover it. 
00:27:35.345 [2024-11-15 10:46:23.554430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.345 [2024-11-15 10:46:23.554521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.345 [2024-11-15 10:46:23.554552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.345 [2024-11-15 10:46:23.554567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.345 [2024-11-15 10:46:23.554579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.345 [2024-11-15 10:46:23.554609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.345 qpair failed and we were unable to recover it. 00:27:35.345 [2024-11-15 10:46:23.564456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.345 [2024-11-15 10:46:23.564547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.345 [2024-11-15 10:46:23.564571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.345 [2024-11-15 10:46:23.564585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.345 [2024-11-15 10:46:23.564596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.345 [2024-11-15 10:46:23.564627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.345 qpair failed and we were unable to recover it. 00:27:35.345 [2024-11-15 10:46:23.574460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.345 [2024-11-15 10:46:23.574546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.345 [2024-11-15 10:46:23.574571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.345 [2024-11-15 10:46:23.574585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.345 [2024-11-15 10:46:23.574596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.345 [2024-11-15 10:46:23.574626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.345 qpair failed and we were unable to recover it. 
00:27:35.345 [2024-11-15 10:46:23.584537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.345 [2024-11-15 10:46:23.584631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.345 [2024-11-15 10:46:23.584658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.345 [2024-11-15 10:46:23.584672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.345 [2024-11-15 10:46:23.584683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.345 [2024-11-15 10:46:23.584713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.345 qpair failed and we were unable to recover it. 00:27:35.345 [2024-11-15 10:46:23.594562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.345 [2024-11-15 10:46:23.594657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.345 [2024-11-15 10:46:23.594685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.345 [2024-11-15 10:46:23.594699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.345 [2024-11-15 10:46:23.594716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.345 [2024-11-15 10:46:23.594747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.345 qpair failed and we were unable to recover it. 00:27:35.345 [2024-11-15 10:46:23.604580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.345 [2024-11-15 10:46:23.604701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.345 [2024-11-15 10:46:23.604726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.345 [2024-11-15 10:46:23.604741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.345 [2024-11-15 10:46:23.604753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.345 [2024-11-15 10:46:23.604782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.345 qpair failed and we were unable to recover it. 
00:27:35.345 [2024-11-15 10:46:23.614600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.345 [2024-11-15 10:46:23.614687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.345 [2024-11-15 10:46:23.614711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.345 [2024-11-15 10:46:23.614725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.345 [2024-11-15 10:46:23.614737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.345 [2024-11-15 10:46:23.614767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.345 qpair failed and we were unable to recover it. 00:27:35.345 [2024-11-15 10:46:23.624712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.345 [2024-11-15 10:46:23.624823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.345 [2024-11-15 10:46:23.624849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.345 [2024-11-15 10:46:23.624863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.345 [2024-11-15 10:46:23.624875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.345 [2024-11-15 10:46:23.624906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.345 qpair failed and we were unable to recover it. 00:27:35.345 [2024-11-15 10:46:23.634698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.345 [2024-11-15 10:46:23.634830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.345 [2024-11-15 10:46:23.634856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.345 [2024-11-15 10:46:23.634869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.345 [2024-11-15 10:46:23.634881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.345 [2024-11-15 10:46:23.634923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.345 qpair failed and we were unable to recover it. 
00:27:35.345 [2024-11-15 10:46:23.644688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.346 [2024-11-15 10:46:23.644801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.346 [2024-11-15 10:46:23.644826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.346 [2024-11-15 10:46:23.644841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.346 [2024-11-15 10:46:23.644853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.346 [2024-11-15 10:46:23.644882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-11-15 10:46:23.654734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.346 [2024-11-15 10:46:23.654833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.346 [2024-11-15 10:46:23.654859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.346 [2024-11-15 10:46:23.654874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.346 [2024-11-15 10:46:23.654885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.346 [2024-11-15 10:46:23.654915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-11-15 10:46:23.664768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.346 [2024-11-15 10:46:23.664873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.346 [2024-11-15 10:46:23.664899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.346 [2024-11-15 10:46:23.664914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.346 [2024-11-15 10:46:23.664926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.346 [2024-11-15 10:46:23.664956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.346 qpair failed and we were unable to recover it. 
00:27:35.346 [2024-11-15 10:46:23.674828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.346 [2024-11-15 10:46:23.674935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.346 [2024-11-15 10:46:23.674961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.346 [2024-11-15 10:46:23.674975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.346 [2024-11-15 10:46:23.674987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.346 [2024-11-15 10:46:23.675017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-11-15 10:46:23.684792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.346 [2024-11-15 10:46:23.684892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.346 [2024-11-15 10:46:23.684925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.346 [2024-11-15 10:46:23.684940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.346 [2024-11-15 10:46:23.684952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.346 [2024-11-15 10:46:23.684982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-11-15 10:46:23.694865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.346 [2024-11-15 10:46:23.694963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.346 [2024-11-15 10:46:23.694989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.346 [2024-11-15 10:46:23.695003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.346 [2024-11-15 10:46:23.695016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.346 [2024-11-15 10:46:23.695057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.346 qpair failed and we were unable to recover it. 
00:27:35.346 [2024-11-15 10:46:23.704809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.346 [2024-11-15 10:46:23.704909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.346 [2024-11-15 10:46:23.704935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.346 [2024-11-15 10:46:23.704949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.346 [2024-11-15 10:46:23.704960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.346 [2024-11-15 10:46:23.704990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-11-15 10:46:23.714913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.346 [2024-11-15 10:46:23.715018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.346 [2024-11-15 10:46:23.715043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.346 [2024-11-15 10:46:23.715056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.346 [2024-11-15 10:46:23.715068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.346 [2024-11-15 10:46:23.715097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-11-15 10:46:23.724920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.346 [2024-11-15 10:46:23.725022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.346 [2024-11-15 10:46:23.725047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.346 [2024-11-15 10:46:23.725067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.346 [2024-11-15 10:46:23.725080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.346 [2024-11-15 10:46:23.725111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.346 qpair failed and we were unable to recover it. 
00:27:35.346 [2024-11-15 10:46:23.734900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.346 [2024-11-15 10:46:23.735000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.346 [2024-11-15 10:46:23.735026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.346 [2024-11-15 10:46:23.735040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.346 [2024-11-15 10:46:23.735052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.346 [2024-11-15 10:46:23.735082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-11-15 10:46:23.744931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.346 [2024-11-15 10:46:23.745033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.346 [2024-11-15 10:46:23.745058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.346 [2024-11-15 10:46:23.745071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.346 [2024-11-15 10:46:23.745083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.346 [2024-11-15 10:46:23.745113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-11-15 10:46:23.755029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.346 [2024-11-15 10:46:23.755141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.346 [2024-11-15 10:46:23.755165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.346 [2024-11-15 10:46:23.755179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.346 [2024-11-15 10:46:23.755192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.346 [2024-11-15 10:46:23.755222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.346 qpair failed and we were unable to recover it. 
00:27:35.346 [2024-11-15 10:46:23.765053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.346 [2024-11-15 10:46:23.765203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.346 [2024-11-15 10:46:23.765229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.347 [2024-11-15 10:46:23.765243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.347 [2024-11-15 10:46:23.765255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.347 [2024-11-15 10:46:23.765294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-11-15 10:46:23.775064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.347 [2024-11-15 10:46:23.775164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.347 [2024-11-15 10:46:23.775189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.347 [2024-11-15 10:46:23.775203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.347 [2024-11-15 10:46:23.775214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.347 [2024-11-15 10:46:23.775245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-11-15 10:46:23.785064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.347 [2024-11-15 10:46:23.785165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.347 [2024-11-15 10:46:23.785191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.347 [2024-11-15 10:46:23.785205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.347 [2024-11-15 10:46:23.785216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.347 [2024-11-15 10:46:23.785246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.347 qpair failed and we were unable to recover it. 
00:27:35.347 [2024-11-15 10:46:23.795126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.347 [2024-11-15 10:46:23.795239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.347 [2024-11-15 10:46:23.795264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.347 [2024-11-15 10:46:23.795278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.347 [2024-11-15 10:46:23.795290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.347 [2024-11-15 10:46:23.795320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-11-15 10:46:23.805144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.347 [2024-11-15 10:46:23.805264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.347 [2024-11-15 10:46:23.805290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.347 [2024-11-15 10:46:23.805304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.347 [2024-11-15 10:46:23.805316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.347 [2024-11-15 10:46:23.805346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.605 [2024-11-15 10:46:23.815173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.605 [2024-11-15 10:46:23.815330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.605 [2024-11-15 10:46:23.815371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.605 [2024-11-15 10:46:23.815388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.605 [2024-11-15 10:46:23.815400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.605 [2024-11-15 10:46:23.815430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.605 qpair failed and we were unable to recover it. 
00:27:35.605 [2024-11-15 10:46:23.825157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.605 [2024-11-15 10:46:23.825256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.605 [2024-11-15 10:46:23.825281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.605 [2024-11-15 10:46:23.825295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.605 [2024-11-15 10:46:23.825307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.605 [2024-11-15 10:46:23.825337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-11-15 10:46:23.835248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.605 [2024-11-15 10:46:23.835353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.605 [2024-11-15 10:46:23.835385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.605 [2024-11-15 10:46:23.835399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.605 [2024-11-15 10:46:23.835411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.605 [2024-11-15 10:46:23.835442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-11-15 10:46:23.845232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.605 [2024-11-15 10:46:23.845336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.605 [2024-11-15 10:46:23.845372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.605 [2024-11-15 10:46:23.845389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.605 [2024-11-15 10:46:23.845401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.605 [2024-11-15 10:46:23.845431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.606 qpair failed and we were unable to recover it. 
00:27:35.606 [2024-11-15 10:46:23.855288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.606 [2024-11-15 10:46:23.855390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.606 [2024-11-15 10:46:23.855419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.606 [2024-11-15 10:46:23.855439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.606 [2024-11-15 10:46:23.855452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.606 [2024-11-15 10:46:23.855482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.606 qpair failed and we were unable to recover it. 00:27:35.606 [2024-11-15 10:46:23.865299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.606 [2024-11-15 10:46:23.865455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.606 [2024-11-15 10:46:23.865481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.606 [2024-11-15 10:46:23.865495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.606 [2024-11-15 10:46:23.865507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.606 [2024-11-15 10:46:23.865537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.606 qpair failed and we were unable to recover it. 00:27:35.606 [2024-11-15 10:46:23.875356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.606 [2024-11-15 10:46:23.875507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.606 [2024-11-15 10:46:23.875532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.606 [2024-11-15 10:46:23.875547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.606 [2024-11-15 10:46:23.875559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.606 [2024-11-15 10:46:23.875598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.606 qpair failed and we were unable to recover it. 
00:27:35.606 [2024-11-15 10:46:23.885381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.606 [2024-11-15 10:46:23.885470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.606 [2024-11-15 10:46:23.885494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.606 [2024-11-15 10:46:23.885507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.606 [2024-11-15 10:46:23.885519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.606 [2024-11-15 10:46:23.885550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.606 qpair failed and we were unable to recover it. 00:27:35.606 [2024-11-15 10:46:23.895391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.606 [2024-11-15 10:46:23.895477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.606 [2024-11-15 10:46:23.895502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.606 [2024-11-15 10:46:23.895516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.606 [2024-11-15 10:46:23.895527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.606 [2024-11-15 10:46:23.895563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.606 qpair failed and we were unable to recover it. 00:27:35.606 [2024-11-15 10:46:23.905421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.606 [2024-11-15 10:46:23.905506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.606 [2024-11-15 10:46:23.905531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.606 [2024-11-15 10:46:23.905544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.606 [2024-11-15 10:46:23.905556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.606 [2024-11-15 10:46:23.905586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.606 qpair failed and we were unable to recover it. 
00:27:35.606 [2024-11-15 10:46:23.915526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.606 [2024-11-15 10:46:23.915662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.606 [2024-11-15 10:46:23.915688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.606 [2024-11-15 10:46:23.915701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.606 [2024-11-15 10:46:23.915713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.606 [2024-11-15 10:46:23.915742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.606 qpair failed and we were unable to recover it. 00:27:35.606 [2024-11-15 10:46:23.925475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.606 [2024-11-15 10:46:23.925564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.606 [2024-11-15 10:46:23.925589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.606 [2024-11-15 10:46:23.925602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.606 [2024-11-15 10:46:23.925614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.606 [2024-11-15 10:46:23.925645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.606 qpair failed and we were unable to recover it. 00:27:35.606 [2024-11-15 10:46:23.935480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.606 [2024-11-15 10:46:23.935563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.606 [2024-11-15 10:46:23.935588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.606 [2024-11-15 10:46:23.935601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.606 [2024-11-15 10:46:23.935613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.606 [2024-11-15 10:46:23.935643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.606 qpair failed and we were unable to recover it. 
00:27:35.606 [2024-11-15 10:46:23.945543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.606 [2024-11-15 10:46:23.945631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.606 [2024-11-15 10:46:23.945655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.606 [2024-11-15 10:46:23.945668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.606 [2024-11-15 10:46:23.945680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.606 [2024-11-15 10:46:23.945710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.606 qpair failed and we were unable to recover it. 00:27:35.606 [2024-11-15 10:46:23.955618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.606 [2024-11-15 10:46:23.955707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.606 [2024-11-15 10:46:23.955736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.606 [2024-11-15 10:46:23.955750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.607 [2024-11-15 10:46:23.955762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.607 [2024-11-15 10:46:23.955792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.607 qpair failed and we were unable to recover it. 00:27:35.607 [2024-11-15 10:46:23.965613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.607 [2024-11-15 10:46:23.965699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.607 [2024-11-15 10:46:23.965723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.607 [2024-11-15 10:46:23.965737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.607 [2024-11-15 10:46:23.965749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.607 [2024-11-15 10:46:23.965778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.607 qpair failed and we were unable to recover it. 
00:27:35.607 [2024-11-15 10:46:23.975650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.607 [2024-11-15 10:46:23.975766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.607 [2024-11-15 10:46:23.975790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.607 [2024-11-15 10:46:23.975804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.607 [2024-11-15 10:46:23.975815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.607 [2024-11-15 10:46:23.975845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.607 qpair failed and we were unable to recover it. 00:27:35.607 [2024-11-15 10:46:23.985652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.607 [2024-11-15 10:46:23.985753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.607 [2024-11-15 10:46:23.985782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.607 [2024-11-15 10:46:23.985797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.607 [2024-11-15 10:46:23.985809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.607 [2024-11-15 10:46:23.985839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.607 qpair failed and we were unable to recover it. 00:27:35.607 [2024-11-15 10:46:23.995739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.607 [2024-11-15 10:46:23.995846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.607 [2024-11-15 10:46:23.995871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.607 [2024-11-15 10:46:23.995885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.607 [2024-11-15 10:46:23.995897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.607 [2024-11-15 10:46:23.995927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.607 qpair failed and we were unable to recover it. 
00:27:35.607 [2024-11-15 10:46:24.005770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.607 [2024-11-15 10:46:24.005882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.607 [2024-11-15 10:46:24.005907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.607 [2024-11-15 10:46:24.005921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.607 [2024-11-15 10:46:24.005933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.607 [2024-11-15 10:46:24.005962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.607 qpair failed and we were unable to recover it. 00:27:35.607 [2024-11-15 10:46:24.015741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.607 [2024-11-15 10:46:24.015844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.607 [2024-11-15 10:46:24.015869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.607 [2024-11-15 10:46:24.015884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.607 [2024-11-15 10:46:24.015895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.607 [2024-11-15 10:46:24.015925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.607 qpair failed and we were unable to recover it. 00:27:35.607 [2024-11-15 10:46:24.025771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.607 [2024-11-15 10:46:24.025876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.607 [2024-11-15 10:46:24.025902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.607 [2024-11-15 10:46:24.025917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.607 [2024-11-15 10:46:24.025934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.607 [2024-11-15 10:46:24.025965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.607 qpair failed and we were unable to recover it. 
00:27:35.607 [2024-11-15 10:46:24.035813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.607 [2024-11-15 10:46:24.035919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.607 [2024-11-15 10:46:24.035943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.607 [2024-11-15 10:46:24.035957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.607 [2024-11-15 10:46:24.035969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.607 [2024-11-15 10:46:24.035999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.607 qpair failed and we were unable to recover it. 00:27:35.607 [2024-11-15 10:46:24.045842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.607 [2024-11-15 10:46:24.045989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.607 [2024-11-15 10:46:24.046015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.607 [2024-11-15 10:46:24.046029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.607 [2024-11-15 10:46:24.046040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.607 [2024-11-15 10:46:24.046070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.607 qpair failed and we were unable to recover it. 00:27:35.607 [2024-11-15 10:46:24.055844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.607 [2024-11-15 10:46:24.055984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.607 [2024-11-15 10:46:24.056009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.607 [2024-11-15 10:46:24.056023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.607 [2024-11-15 10:46:24.056036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.607 [2024-11-15 10:46:24.056065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.607 qpair failed and we were unable to recover it. 
00:27:35.607 [2024-11-15 10:46:24.065879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.607 [2024-11-15 10:46:24.065976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.607 [2024-11-15 10:46:24.066004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.607 [2024-11-15 10:46:24.066018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.607 [2024-11-15 10:46:24.066030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.607 [2024-11-15 10:46:24.066060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.607 qpair failed and we were unable to recover it. 00:27:35.868 [2024-11-15 10:46:24.075950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.868 [2024-11-15 10:46:24.076079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.868 [2024-11-15 10:46:24.076103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.868 [2024-11-15 10:46:24.076117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.868 [2024-11-15 10:46:24.076129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.868 [2024-11-15 10:46:24.076160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.868 qpair failed and we were unable to recover it. 00:27:35.868 [2024-11-15 10:46:24.085916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.868 [2024-11-15 10:46:24.086024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.868 [2024-11-15 10:46:24.086050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.868 [2024-11-15 10:46:24.086064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.868 [2024-11-15 10:46:24.086076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.868 [2024-11-15 10:46:24.086106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.868 qpair failed and we were unable to recover it. 
00:27:35.868 [2024-11-15 10:46:24.095966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.868 [2024-11-15 10:46:24.096069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.868 [2024-11-15 10:46:24.096094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.868 [2024-11-15 10:46:24.096108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.868 [2024-11-15 10:46:24.096120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.868 [2024-11-15 10:46:24.096150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.868 qpair failed and we were unable to recover it. 00:27:35.868 [2024-11-15 10:46:24.106032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.868 [2024-11-15 10:46:24.106154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.868 [2024-11-15 10:46:24.106180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.868 [2024-11-15 10:46:24.106195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.868 [2024-11-15 10:46:24.106206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.868 [2024-11-15 10:46:24.106243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.868 qpair failed and we were unable to recover it. 00:27:35.868 [2024-11-15 10:46:24.116067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.868 [2024-11-15 10:46:24.116202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.868 [2024-11-15 10:46:24.116233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.868 [2024-11-15 10:46:24.116249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.868 [2024-11-15 10:46:24.116261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.868 [2024-11-15 10:46:24.116291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.868 qpair failed and we were unable to recover it. 
00:27:35.868 [2024-11-15 10:46:24.126092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.868 [2024-11-15 10:46:24.126196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.868 [2024-11-15 10:46:24.126222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.868 [2024-11-15 10:46:24.126236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.868 [2024-11-15 10:46:24.126248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.868 [2024-11-15 10:46:24.126278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.868 qpair failed and we were unable to recover it. 00:27:35.868 [2024-11-15 10:46:24.136123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.868 [2024-11-15 10:46:24.136271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.868 [2024-11-15 10:46:24.136297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.868 [2024-11-15 10:46:24.136311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.868 [2024-11-15 10:46:24.136323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.868 [2024-11-15 10:46:24.136371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.868 qpair failed and we were unable to recover it. 00:27:35.868 [2024-11-15 10:46:24.146112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.869 [2024-11-15 10:46:24.146213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.869 [2024-11-15 10:46:24.146238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.869 [2024-11-15 10:46:24.146253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.869 [2024-11-15 10:46:24.146265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.869 [2024-11-15 10:46:24.146295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.869 qpair failed and we were unable to recover it. 
00:27:35.869 [2024-11-15 10:46:24.156163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.869 [2024-11-15 10:46:24.156285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.869 [2024-11-15 10:46:24.156311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.869 [2024-11-15 10:46:24.156326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.869 [2024-11-15 10:46:24.156344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.869 [2024-11-15 10:46:24.156396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.869 qpair failed and we were unable to recover it. 00:27:35.869 [2024-11-15 10:46:24.166137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.869 [2024-11-15 10:46:24.166289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.869 [2024-11-15 10:46:24.166315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.869 [2024-11-15 10:46:24.166329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.869 [2024-11-15 10:46:24.166341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.869 [2024-11-15 10:46:24.166377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.869 qpair failed and we were unable to recover it. 00:27:35.869 [2024-11-15 10:46:24.176196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.869 [2024-11-15 10:46:24.176295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.869 [2024-11-15 10:46:24.176320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.869 [2024-11-15 10:46:24.176333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.869 [2024-11-15 10:46:24.176345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.869 [2024-11-15 10:46:24.176382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.869 qpair failed and we were unable to recover it. 
00:27:35.869 [2024-11-15 10:46:24.186216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.869 [2024-11-15 10:46:24.186358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.869 [2024-11-15 10:46:24.186391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.869 [2024-11-15 10:46:24.186406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.869 [2024-11-15 10:46:24.186418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.869 [2024-11-15 10:46:24.186448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.869 qpair failed and we were unable to recover it. 00:27:35.869 [2024-11-15 10:46:24.196224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.869 [2024-11-15 10:46:24.196380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.869 [2024-11-15 10:46:24.196407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.869 [2024-11-15 10:46:24.196422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.869 [2024-11-15 10:46:24.196434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.869 [2024-11-15 10:46:24.196474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.869 qpair failed and we were unable to recover it. 00:27:35.869 [2024-11-15 10:46:24.206286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.869 [2024-11-15 10:46:24.206433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.869 [2024-11-15 10:46:24.206459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.869 [2024-11-15 10:46:24.206474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.869 [2024-11-15 10:46:24.206485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.869 [2024-11-15 10:46:24.206516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.869 qpair failed and we were unable to recover it. 
00:27:35.869 [2024-11-15 10:46:24.216259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.869 [2024-11-15 10:46:24.216388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.869 [2024-11-15 10:46:24.216413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.869 [2024-11-15 10:46:24.216427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.869 [2024-11-15 10:46:24.216439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.869 [2024-11-15 10:46:24.216470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.869 qpair failed and we were unable to recover it. 00:27:35.869 [2024-11-15 10:46:24.226316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.869 [2024-11-15 10:46:24.226422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.869 [2024-11-15 10:46:24.226449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.869 [2024-11-15 10:46:24.226463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.869 [2024-11-15 10:46:24.226475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.869 [2024-11-15 10:46:24.226506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.869 qpair failed and we were unable to recover it. 00:27:35.869 [2024-11-15 10:46:24.236321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.869 [2024-11-15 10:46:24.236440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.869 [2024-11-15 10:46:24.236466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.869 [2024-11-15 10:46:24.236480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.869 [2024-11-15 10:46:24.236492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.869 [2024-11-15 10:46:24.236523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.869 qpair failed and we were unable to recover it. 
00:27:35.869 [2024-11-15 10:46:24.246606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.870 [2024-11-15 10:46:24.246708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.870 [2024-11-15 10:46:24.246739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.870 [2024-11-15 10:46:24.246754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.870 [2024-11-15 10:46:24.246766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.870 [2024-11-15 10:46:24.246796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.870 qpair failed and we were unable to recover it. 00:27:35.870 [2024-11-15 10:46:24.256476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.870 [2024-11-15 10:46:24.256585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.870 [2024-11-15 10:46:24.256609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.870 [2024-11-15 10:46:24.256624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.870 [2024-11-15 10:46:24.256636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.870 [2024-11-15 10:46:24.256666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.870 qpair failed and we were unable to recover it. 00:27:35.870 [2024-11-15 10:46:24.266489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.870 [2024-11-15 10:46:24.266580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.870 [2024-11-15 10:46:24.266604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.870 [2024-11-15 10:46:24.266617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.870 [2024-11-15 10:46:24.266629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.870 [2024-11-15 10:46:24.266658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.870 qpair failed and we were unable to recover it. 
00:27:35.870 [2024-11-15 10:46:24.276539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.870 [2024-11-15 10:46:24.276633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.870 [2024-11-15 10:46:24.276658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.870 [2024-11-15 10:46:24.276672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.870 [2024-11-15 10:46:24.276683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.870 [2024-11-15 10:46:24.276713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.870 qpair failed and we were unable to recover it. 00:27:35.870 [2024-11-15 10:46:24.286476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.870 [2024-11-15 10:46:24.286567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.870 [2024-11-15 10:46:24.286596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.870 [2024-11-15 10:46:24.286616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.870 [2024-11-15 10:46:24.286629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.870 [2024-11-15 10:46:24.286659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.870 qpair failed and we were unable to recover it. 00:27:35.870 [2024-11-15 10:46:24.296509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.870 [2024-11-15 10:46:24.296599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.870 [2024-11-15 10:46:24.296625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.870 [2024-11-15 10:46:24.296640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.870 [2024-11-15 10:46:24.296652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.870 [2024-11-15 10:46:24.296682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.870 qpair failed and we were unable to recover it. 
00:27:35.870 [2024-11-15 10:46:24.306613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.870 [2024-11-15 10:46:24.306723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.870 [2024-11-15 10:46:24.306780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.870 [2024-11-15 10:46:24.306796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.870 [2024-11-15 10:46:24.306808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.870 [2024-11-15 10:46:24.306852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.870 qpair failed and we were unable to recover it. 00:27:35.870 [2024-11-15 10:46:24.316653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.870 [2024-11-15 10:46:24.316748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.870 [2024-11-15 10:46:24.316773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.870 [2024-11-15 10:46:24.316788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.870 [2024-11-15 10:46:24.316800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.870 [2024-11-15 10:46:24.316829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.870 qpair failed and we were unable to recover it. 00:27:35.870 [2024-11-15 10:46:24.326590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.870 [2024-11-15 10:46:24.326681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.870 [2024-11-15 10:46:24.326708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.870 [2024-11-15 10:46:24.326723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.870 [2024-11-15 10:46:24.326735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:35.870 [2024-11-15 10:46:24.326770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.870 qpair failed and we were unable to recover it. 
00:27:36.129 [2024-11-15 10:46:24.336606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.129 [2024-11-15 10:46:24.336697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.129 [2024-11-15 10:46:24.336724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.129 [2024-11-15 10:46:24.336738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.129 [2024-11-15 10:46:24.336750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.129 [2024-11-15 10:46:24.336780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.129 qpair failed and we were unable to recover it. 00:27:36.129 [2024-11-15 10:46:24.346639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.129 [2024-11-15 10:46:24.346732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.129 [2024-11-15 10:46:24.346757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.129 [2024-11-15 10:46:24.346771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.129 [2024-11-15 10:46:24.346783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.129 [2024-11-15 10:46:24.346813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.129 qpair failed and we were unable to recover it. 00:27:36.129 [2024-11-15 10:46:24.356728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.129 [2024-11-15 10:46:24.356836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.129 [2024-11-15 10:46:24.356860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.129 [2024-11-15 10:46:24.356875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.129 [2024-11-15 10:46:24.356887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.129 [2024-11-15 10:46:24.356921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.129 qpair failed and we were unable to recover it. 
00:27:36.129 [2024-11-15 10:46:24.366773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.129 [2024-11-15 10:46:24.366881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.129 [2024-11-15 10:46:24.366907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.129 [2024-11-15 10:46:24.366922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.129 [2024-11-15 10:46:24.366933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.129 [2024-11-15 10:46:24.366963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.129 qpair failed and we were unable to recover it. 00:27:36.129 [2024-11-15 10:46:24.376741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.129 [2024-11-15 10:46:24.376851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.129 [2024-11-15 10:46:24.376877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.129 [2024-11-15 10:46:24.376891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.129 [2024-11-15 10:46:24.376903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.129 [2024-11-15 10:46:24.376934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.129 qpair failed and we were unable to recover it. 00:27:36.129 [2024-11-15 10:46:24.386733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.129 [2024-11-15 10:46:24.386831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.129 [2024-11-15 10:46:24.386855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.129 [2024-11-15 10:46:24.386869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.129 [2024-11-15 10:46:24.386881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.129 [2024-11-15 10:46:24.386911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.129 qpair failed and we were unable to recover it. 
00:27:36.129 [2024-11-15 10:46:24.396772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.129 [2024-11-15 10:46:24.396924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.129 [2024-11-15 10:46:24.396949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.129 [2024-11-15 10:46:24.396963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.129 [2024-11-15 10:46:24.396976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.129 [2024-11-15 10:46:24.397006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.129 qpair failed and we were unable to recover it. 00:27:36.129 [2024-11-15 10:46:24.406818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.129 [2024-11-15 10:46:24.406974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.129 [2024-11-15 10:46:24.407000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.130 [2024-11-15 10:46:24.407014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.130 [2024-11-15 10:46:24.407027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.130 [2024-11-15 10:46:24.407057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.130 qpair failed and we were unable to recover it. 00:27:36.130 [2024-11-15 10:46:24.416839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.130 [2024-11-15 10:46:24.416937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.130 [2024-11-15 10:46:24.416965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.130 [2024-11-15 10:46:24.416988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.130 [2024-11-15 10:46:24.417001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.130 [2024-11-15 10:46:24.417031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.130 qpair failed and we were unable to recover it. 
00:27:36.130 [2024-11-15 10:46:24.426890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.130 [2024-11-15 10:46:24.426990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.130 [2024-11-15 10:46:24.427015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.130 [2024-11-15 10:46:24.427029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.130 [2024-11-15 10:46:24.427041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.130 [2024-11-15 10:46:24.427071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.130 qpair failed and we were unable to recover it. 00:27:36.130 [2024-11-15 10:46:24.436898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.130 [2024-11-15 10:46:24.437017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.130 [2024-11-15 10:46:24.437043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.130 [2024-11-15 10:46:24.437057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.130 [2024-11-15 10:46:24.437069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.130 [2024-11-15 10:46:24.437098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.130 qpair failed and we were unable to recover it. 00:27:36.130 [2024-11-15 10:46:24.446901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.130 [2024-11-15 10:46:24.447003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.130 [2024-11-15 10:46:24.447027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.130 [2024-11-15 10:46:24.447041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.130 [2024-11-15 10:46:24.447054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.130 [2024-11-15 10:46:24.447083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.130 qpair failed and we were unable to recover it. 
00:27:36.130 [2024-11-15 10:46:24.457004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.130 [2024-11-15 10:46:24.457117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.130 [2024-11-15 10:46:24.457142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.130 [2024-11-15 10:46:24.457157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.130 [2024-11-15 10:46:24.457169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.130 [2024-11-15 10:46:24.457204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.130 qpair failed and we were unable to recover it. 00:27:36.130 [2024-11-15 10:46:24.466981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.130 [2024-11-15 10:46:24.467084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.130 [2024-11-15 10:46:24.467110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.130 [2024-11-15 10:46:24.467124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.130 [2024-11-15 10:46:24.467137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.130 [2024-11-15 10:46:24.467167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.130 qpair failed and we were unable to recover it. 00:27:36.130 [2024-11-15 10:46:24.477033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.130 [2024-11-15 10:46:24.477140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.130 [2024-11-15 10:46:24.477165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.130 [2024-11-15 10:46:24.477179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.130 [2024-11-15 10:46:24.477191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.130 [2024-11-15 10:46:24.477221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.130 qpair failed and we were unable to recover it. 
00:27:36.130 [2024-11-15 10:46:24.487077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.130 [2024-11-15 10:46:24.487214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.130 [2024-11-15 10:46:24.487239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.130 [2024-11-15 10:46:24.487254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.130 [2024-11-15 10:46:24.487265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.130 [2024-11-15 10:46:24.487296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.130 qpair failed and we were unable to recover it. 00:27:36.130 [2024-11-15 10:46:24.497044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.130 [2024-11-15 10:46:24.497149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.130 [2024-11-15 10:46:24.497174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.130 [2024-11-15 10:46:24.497188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.130 [2024-11-15 10:46:24.497200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.130 [2024-11-15 10:46:24.497230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.130 qpair failed and we were unable to recover it. 00:27:36.130 [2024-11-15 10:46:24.507058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.130 [2024-11-15 10:46:24.507158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.130 [2024-11-15 10:46:24.507182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.130 [2024-11-15 10:46:24.507196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.130 [2024-11-15 10:46:24.507208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.130 [2024-11-15 10:46:24.507238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.130 qpair failed and we were unable to recover it. 
00:27:36.130 [2024-11-15 10:46:24.517091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.130 [2024-11-15 10:46:24.517197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.130 [2024-11-15 10:46:24.517225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.130 [2024-11-15 10:46:24.517239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.130 [2024-11-15 10:46:24.517251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.130 [2024-11-15 10:46:24.517280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.130 qpair failed and we were unable to recover it. 00:27:36.130 [2024-11-15 10:46:24.527122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.130 [2024-11-15 10:46:24.527246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.130 [2024-11-15 10:46:24.527271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.130 [2024-11-15 10:46:24.527286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.130 [2024-11-15 10:46:24.527297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.130 [2024-11-15 10:46:24.527328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.130 qpair failed and we were unable to recover it. 00:27:36.130 [2024-11-15 10:46:24.537151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.131 [2024-11-15 10:46:24.537254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.131 [2024-11-15 10:46:24.537279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.131 [2024-11-15 10:46:24.537294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.131 [2024-11-15 10:46:24.537306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.131 [2024-11-15 10:46:24.537336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.131 qpair failed and we were unable to recover it. 
00:27:36.131 [2024-11-15 10:46:24.547166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.131 [2024-11-15 10:46:24.547268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.131 [2024-11-15 10:46:24.547298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.131 [2024-11-15 10:46:24.547313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.131 [2024-11-15 10:46:24.547325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.131 [2024-11-15 10:46:24.547355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.131 qpair failed and we were unable to recover it. 00:27:36.131 [2024-11-15 10:46:24.557221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.131 [2024-11-15 10:46:24.557331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.131 [2024-11-15 10:46:24.557356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.131 [2024-11-15 10:46:24.557381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.131 [2024-11-15 10:46:24.557393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.131 [2024-11-15 10:46:24.557424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.131 qpair failed and we were unable to recover it. 00:27:36.131 [2024-11-15 10:46:24.567271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.131 [2024-11-15 10:46:24.567382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.131 [2024-11-15 10:46:24.567409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.131 [2024-11-15 10:46:24.567423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.131 [2024-11-15 10:46:24.567435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.131 [2024-11-15 10:46:24.567465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.131 qpair failed and we were unable to recover it. 
00:27:36.131 [2024-11-15 10:46:24.577297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.131 [2024-11-15 10:46:24.577403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.131 [2024-11-15 10:46:24.577429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.131 [2024-11-15 10:46:24.577443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.131 [2024-11-15 10:46:24.577455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.131 [2024-11-15 10:46:24.577485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.131 qpair failed and we were unable to recover it. 00:27:36.131 [2024-11-15 10:46:24.587373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.131 [2024-11-15 10:46:24.587476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.131 [2024-11-15 10:46:24.587501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.131 [2024-11-15 10:46:24.587516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.131 [2024-11-15 10:46:24.587533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.131 [2024-11-15 10:46:24.587564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.131 qpair failed and we were unable to recover it. 00:27:36.390 [2024-11-15 10:46:24.597406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.391 [2024-11-15 10:46:24.597499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.391 [2024-11-15 10:46:24.597525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.391 [2024-11-15 10:46:24.597540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.391 [2024-11-15 10:46:24.597552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.391 [2024-11-15 10:46:24.597583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.391 qpair failed and we were unable to recover it. 
00:27:36.391 [2024-11-15 10:46:24.607360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.391 [2024-11-15 10:46:24.607515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.391 [2024-11-15 10:46:24.607540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.391 [2024-11-15 10:46:24.607554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.391 [2024-11-15 10:46:24.607566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.391 [2024-11-15 10:46:24.607596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.391 qpair failed and we were unable to recover it. 00:27:36.391 [2024-11-15 10:46:24.617428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.391 [2024-11-15 10:46:24.617520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.391 [2024-11-15 10:46:24.617544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.391 [2024-11-15 10:46:24.617557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.391 [2024-11-15 10:46:24.617569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.391 [2024-11-15 10:46:24.617599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.391 qpair failed and we were unable to recover it. 00:27:36.391 [2024-11-15 10:46:24.627447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.391 [2024-11-15 10:46:24.627569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.391 [2024-11-15 10:46:24.627594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.391 [2024-11-15 10:46:24.627608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.391 [2024-11-15 10:46:24.627619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.391 [2024-11-15 10:46:24.627650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.391 qpair failed and we were unable to recover it. 
00:27:36.391 [2024-11-15 10:46:24.637531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.391 [2024-11-15 10:46:24.637653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.391 [2024-11-15 10:46:24.637680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.391 [2024-11-15 10:46:24.637694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.391 [2024-11-15 10:46:24.637706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.391 [2024-11-15 10:46:24.637736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.391 qpair failed and we were unable to recover it. 00:27:36.391 [2024-11-15 10:46:24.647541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.391 [2024-11-15 10:46:24.647656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.391 [2024-11-15 10:46:24.647684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.391 [2024-11-15 10:46:24.647699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.391 [2024-11-15 10:46:24.647711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.391 [2024-11-15 10:46:24.647742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.391 qpair failed and we were unable to recover it. 00:27:36.391 [2024-11-15 10:46:24.657567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.391 [2024-11-15 10:46:24.657661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.391 [2024-11-15 10:46:24.657685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.391 [2024-11-15 10:46:24.657699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.391 [2024-11-15 10:46:24.657711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.391 [2024-11-15 10:46:24.657741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.391 qpair failed and we were unable to recover it. 
00:27:36.391 [2024-11-15 10:46:24.667577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.391 [2024-11-15 10:46:24.667698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.391 [2024-11-15 10:46:24.667723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.391 [2024-11-15 10:46:24.667738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.391 [2024-11-15 10:46:24.667750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.391 [2024-11-15 10:46:24.667779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.391 qpair failed and we were unable to recover it. 00:27:36.391 [2024-11-15 10:46:24.677622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.391 [2024-11-15 10:46:24.677714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.391 [2024-11-15 10:46:24.677744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.391 [2024-11-15 10:46:24.677759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.391 [2024-11-15 10:46:24.677771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.391 [2024-11-15 10:46:24.677801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.391 qpair failed and we were unable to recover it. 00:27:36.391 [2024-11-15 10:46:24.687643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.391 [2024-11-15 10:46:24.687778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.391 [2024-11-15 10:46:24.687802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.391 [2024-11-15 10:46:24.687816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.391 [2024-11-15 10:46:24.687827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.391 [2024-11-15 10:46:24.687858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.391 qpair failed and we were unable to recover it. 
00:27:36.391 [2024-11-15 10:46:24.697699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.391 [2024-11-15 10:46:24.697838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.391 [2024-11-15 10:46:24.697864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.391 [2024-11-15 10:46:24.697879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.391 [2024-11-15 10:46:24.697891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.391 [2024-11-15 10:46:24.697921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.391 qpair failed and we were unable to recover it. 00:27:36.391 [2024-11-15 10:46:24.707706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.391 [2024-11-15 10:46:24.707794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.391 [2024-11-15 10:46:24.707820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.391 [2024-11-15 10:46:24.707834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.391 [2024-11-15 10:46:24.707846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.391 [2024-11-15 10:46:24.707876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.391 qpair failed and we were unable to recover it. 00:27:36.391 [2024-11-15 10:46:24.717749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.391 [2024-11-15 10:46:24.717845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.392 [2024-11-15 10:46:24.717870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.392 [2024-11-15 10:46:24.717884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.392 [2024-11-15 10:46:24.717902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.392 [2024-11-15 10:46:24.717933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.392 qpair failed and we were unable to recover it. 
00:27:36.392 [2024-11-15 10:46:24.727833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.392 [2024-11-15 10:46:24.727977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.392 [2024-11-15 10:46:24.728003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.392 [2024-11-15 10:46:24.728018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.392 [2024-11-15 10:46:24.728030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.392 [2024-11-15 10:46:24.728061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.392 qpair failed and we were unable to recover it. 00:27:36.392 [2024-11-15 10:46:24.737784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.392 [2024-11-15 10:46:24.737882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.392 [2024-11-15 10:46:24.737907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.392 [2024-11-15 10:46:24.737921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.392 [2024-11-15 10:46:24.737934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.392 [2024-11-15 10:46:24.737963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.392 qpair failed and we were unable to recover it. 00:27:36.392 [2024-11-15 10:46:24.747786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.392 [2024-11-15 10:46:24.747885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.392 [2024-11-15 10:46:24.747909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.392 [2024-11-15 10:46:24.747923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.392 [2024-11-15 10:46:24.747935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.392 [2024-11-15 10:46:24.747965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.392 qpair failed and we were unable to recover it. 
00:27:36.392 [2024-11-15 10:46:24.757849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.392 [2024-11-15 10:46:24.757980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.392 [2024-11-15 10:46:24.758006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.392 [2024-11-15 10:46:24.758020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.392 [2024-11-15 10:46:24.758032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.392 [2024-11-15 10:46:24.758063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.392 qpair failed and we were unable to recover it. 00:27:36.392 [2024-11-15 10:46:24.767896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.392 [2024-11-15 10:46:24.768029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.392 [2024-11-15 10:46:24.768055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.392 [2024-11-15 10:46:24.768070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.392 [2024-11-15 10:46:24.768082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.392 [2024-11-15 10:46:24.768113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.392 qpair failed and we were unable to recover it. 00:27:36.392 [2024-11-15 10:46:24.777888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.392 [2024-11-15 10:46:24.777991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.392 [2024-11-15 10:46:24.778017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.392 [2024-11-15 10:46:24.778031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.392 [2024-11-15 10:46:24.778043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.392 [2024-11-15 10:46:24.778073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.392 qpair failed and we were unable to recover it. 
00:27:36.392 [2024-11-15 10:46:24.787983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.392 [2024-11-15 10:46:24.788079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.392 [2024-11-15 10:46:24.788109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.392 [2024-11-15 10:46:24.788123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.392 [2024-11-15 10:46:24.788135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.392 [2024-11-15 10:46:24.788165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.392 qpair failed and we were unable to recover it. 00:27:36.392 [2024-11-15 10:46:24.797939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.392 [2024-11-15 10:46:24.798050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.392 [2024-11-15 10:46:24.798076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.392 [2024-11-15 10:46:24.798091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.392 [2024-11-15 10:46:24.798103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.392 [2024-11-15 10:46:24.798133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.392 qpair failed and we were unable to recover it. 00:27:36.392 [2024-11-15 10:46:24.807943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.392 [2024-11-15 10:46:24.808076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.392 [2024-11-15 10:46:24.808101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.392 [2024-11-15 10:46:24.808116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.392 [2024-11-15 10:46:24.808128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.392 [2024-11-15 10:46:24.808158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.392 qpair failed and we were unable to recover it. 
00:27:36.392 [2024-11-15 10:46:24.818003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.392 [2024-11-15 10:46:24.818105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.392 [2024-11-15 10:46:24.818131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.392 [2024-11-15 10:46:24.818145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.392 [2024-11-15 10:46:24.818157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.392 [2024-11-15 10:46:24.818187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.392 qpair failed and we were unable to recover it. 00:27:36.392 [2024-11-15 10:46:24.828046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.392 [2024-11-15 10:46:24.828178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.392 [2024-11-15 10:46:24.828204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.392 [2024-11-15 10:46:24.828218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.392 [2024-11-15 10:46:24.828230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.392 [2024-11-15 10:46:24.828260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.392 qpair failed and we were unable to recover it. 00:27:36.392 [2024-11-15 10:46:24.838092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.392 [2024-11-15 10:46:24.838203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.392 [2024-11-15 10:46:24.838229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.392 [2024-11-15 10:46:24.838243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.393 [2024-11-15 10:46:24.838254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.393 [2024-11-15 10:46:24.838285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.393 qpair failed and we were unable to recover it. 
00:27:36.393 [2024-11-15 10:46:24.848043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.393 [2024-11-15 10:46:24.848149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.393 [2024-11-15 10:46:24.848174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.393 [2024-11-15 10:46:24.848194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.393 [2024-11-15 10:46:24.848207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.393 [2024-11-15 10:46:24.848238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.393 qpair failed and we were unable to recover it. 00:27:36.652 [2024-11-15 10:46:24.858188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.652 [2024-11-15 10:46:24.858319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.652 [2024-11-15 10:46:24.858345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.652 [2024-11-15 10:46:24.858359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.652 [2024-11-15 10:46:24.858379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.652 [2024-11-15 10:46:24.858410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.652 qpair failed and we were unable to recover it. 00:27:36.652 [2024-11-15 10:46:24.868094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.652 [2024-11-15 10:46:24.868193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.652 [2024-11-15 10:46:24.868218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.652 [2024-11-15 10:46:24.868232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.652 [2024-11-15 10:46:24.868244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.652 [2024-11-15 10:46:24.868273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.652 qpair failed and we were unable to recover it. 
00:27:36.652 [2024-11-15 10:46:24.878220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.652 [2024-11-15 10:46:24.878330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.652 [2024-11-15 10:46:24.878355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.652 [2024-11-15 10:46:24.878379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.652 [2024-11-15 10:46:24.878392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.652 [2024-11-15 10:46:24.878423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.652 qpair failed and we were unable to recover it. 00:27:36.652 [2024-11-15 10:46:24.888207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.652 [2024-11-15 10:46:24.888348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.652 [2024-11-15 10:46:24.888382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.652 [2024-11-15 10:46:24.888397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.652 [2024-11-15 10:46:24.888410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.652 [2024-11-15 10:46:24.888446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.652 qpair failed and we were unable to recover it. 00:27:36.652 [2024-11-15 10:46:24.898214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.652 [2024-11-15 10:46:24.898369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.652 [2024-11-15 10:46:24.898395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.652 [2024-11-15 10:46:24.898410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.652 [2024-11-15 10:46:24.898422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.652 [2024-11-15 10:46:24.898451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.652 qpair failed and we were unable to recover it. 
00:27:36.652 [2024-11-15 10:46:24.908233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.652 [2024-11-15 10:46:24.908336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.653 [2024-11-15 10:46:24.908370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.653 [2024-11-15 10:46:24.908387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.653 [2024-11-15 10:46:24.908399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.653 [2024-11-15 10:46:24.908429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.653 qpair failed and we were unable to recover it. 00:27:36.653 [2024-11-15 10:46:24.918264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.653 [2024-11-15 10:46:24.918389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.653 [2024-11-15 10:46:24.918415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.653 [2024-11-15 10:46:24.918429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.653 [2024-11-15 10:46:24.918442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.653 [2024-11-15 10:46:24.918473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.653 qpair failed and we were unable to recover it. 00:27:36.653 [2024-11-15 10:46:24.928287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.653 [2024-11-15 10:46:24.928393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.653 [2024-11-15 10:46:24.928418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.653 [2024-11-15 10:46:24.928431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.653 [2024-11-15 10:46:24.928443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.653 [2024-11-15 10:46:24.928473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.653 qpair failed and we were unable to recover it. 
00:27:36.653 [2024-11-15 10:46:24.938427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.653 [2024-11-15 10:46:24.938532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.653 [2024-11-15 10:46:24.938558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.653 [2024-11-15 10:46:24.938572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.653 [2024-11-15 10:46:24.938583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.653 [2024-11-15 10:46:24.938614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.653 qpair failed and we were unable to recover it. 00:27:36.653 [2024-11-15 10:46:24.948359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.653 [2024-11-15 10:46:24.948455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.653 [2024-11-15 10:46:24.948478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.653 [2024-11-15 10:46:24.948492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.653 [2024-11-15 10:46:24.948504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.653 [2024-11-15 10:46:24.948534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.653 qpair failed and we were unable to recover it. 00:27:36.653 [2024-11-15 10:46:24.958415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.653 [2024-11-15 10:46:24.958518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.653 [2024-11-15 10:46:24.958544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.653 [2024-11-15 10:46:24.958559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.653 [2024-11-15 10:46:24.958570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.653 [2024-11-15 10:46:24.958601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.653 qpair failed and we were unable to recover it. 
00:27:36.653 [2024-11-15 10:46:24.968417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.653 [2024-11-15 10:46:24.968503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.653 [2024-11-15 10:46:24.968528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.653 [2024-11-15 10:46:24.968542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.653 [2024-11-15 10:46:24.968554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.653 [2024-11-15 10:46:24.968585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.653 qpair failed and we were unable to recover it. 00:27:36.653 [2024-11-15 10:46:24.978440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.653 [2024-11-15 10:46:24.978529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.653 [2024-11-15 10:46:24.978554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.653 [2024-11-15 10:46:24.978574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.653 [2024-11-15 10:46:24.978587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.653 [2024-11-15 10:46:24.978618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.653 qpair failed and we were unable to recover it. 00:27:36.653 [2024-11-15 10:46:24.988567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.653 [2024-11-15 10:46:24.988663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.653 [2024-11-15 10:46:24.988688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.653 [2024-11-15 10:46:24.988702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.653 [2024-11-15 10:46:24.988714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.653 [2024-11-15 10:46:24.988744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.653 qpair failed and we were unable to recover it. 
00:27:36.653 [2024-11-15 10:46:24.998522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.653 [2024-11-15 10:46:24.998619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.653 [2024-11-15 10:46:24.998645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.653 [2024-11-15 10:46:24.998659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.653 [2024-11-15 10:46:24.998671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.653 [2024-11-15 10:46:24.998701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.653 qpair failed and we were unable to recover it. 00:27:36.653 [2024-11-15 10:46:25.008601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.653 [2024-11-15 10:46:25.008693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.653 [2024-11-15 10:46:25.008717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.653 [2024-11-15 10:46:25.008730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.653 [2024-11-15 10:46:25.008743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.653 [2024-11-15 10:46:25.008774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.653 qpair failed and we were unable to recover it. 00:27:36.653 [2024-11-15 10:46:25.018540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.653 [2024-11-15 10:46:25.018634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.653 [2024-11-15 10:46:25.018661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.653 [2024-11-15 10:46:25.018676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.653 [2024-11-15 10:46:25.018688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.653 [2024-11-15 10:46:25.018724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.653 qpair failed and we were unable to recover it. 
00:27:36.653 [2024-11-15 10:46:25.028563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.653 [2024-11-15 10:46:25.028697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.654 [2024-11-15 10:46:25.028723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.654 [2024-11-15 10:46:25.028737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.654 [2024-11-15 10:46:25.028749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.654 [2024-11-15 10:46:25.028779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.654 qpair failed and we were unable to recover it. 00:27:36.654 [2024-11-15 10:46:25.038689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.654 [2024-11-15 10:46:25.038846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.654 [2024-11-15 10:46:25.038871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.654 [2024-11-15 10:46:25.038885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.654 [2024-11-15 10:46:25.038898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.654 [2024-11-15 10:46:25.038929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.654 qpair failed and we were unable to recover it. 00:27:36.654 [2024-11-15 10:46:25.048649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.654 [2024-11-15 10:46:25.048778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.654 [2024-11-15 10:46:25.048802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.654 [2024-11-15 10:46:25.048816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.654 [2024-11-15 10:46:25.048828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.654 [2024-11-15 10:46:25.048858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.654 qpair failed and we were unable to recover it. 
00:27:36.654 [2024-11-15 10:46:25.058682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.654 [2024-11-15 10:46:25.058789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.654 [2024-11-15 10:46:25.058814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.654 [2024-11-15 10:46:25.058828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.654 [2024-11-15 10:46:25.058840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.654 [2024-11-15 10:46:25.058871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.654 qpair failed and we were unable to recover it. 00:27:36.654 [2024-11-15 10:46:25.068722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.654 [2024-11-15 10:46:25.068824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.654 [2024-11-15 10:46:25.068850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.654 [2024-11-15 10:46:25.068864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.654 [2024-11-15 10:46:25.068876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.654 [2024-11-15 10:46:25.068906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.654 qpair failed and we were unable to recover it. 00:27:36.654 [2024-11-15 10:46:25.078757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.654 [2024-11-15 10:46:25.078874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.654 [2024-11-15 10:46:25.078898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.654 [2024-11-15 10:46:25.078912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.654 [2024-11-15 10:46:25.078923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.654 [2024-11-15 10:46:25.078954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.654 qpair failed and we were unable to recover it. 
00:27:36.654 [2024-11-15 10:46:25.088770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.654 [2024-11-15 10:46:25.088876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.654 [2024-11-15 10:46:25.088902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.654 [2024-11-15 10:46:25.088916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.654 [2024-11-15 10:46:25.088928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.654 [2024-11-15 10:46:25.088958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.654 qpair failed and we were unable to recover it. 00:27:36.654 [2024-11-15 10:46:25.098771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.654 [2024-11-15 10:46:25.098875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.654 [2024-11-15 10:46:25.098900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.654 [2024-11-15 10:46:25.098914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.654 [2024-11-15 10:46:25.098926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.654 [2024-11-15 10:46:25.098956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.654 qpair failed and we were unable to recover it. 00:27:36.654 [2024-11-15 10:46:25.108776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.654 [2024-11-15 10:46:25.108882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.654 [2024-11-15 10:46:25.108913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.654 [2024-11-15 10:46:25.108928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.654 [2024-11-15 10:46:25.108939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.654 [2024-11-15 10:46:25.108970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.654 qpair failed and we were unable to recover it. 
00:27:36.914 [2024-11-15 10:46:25.118849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.914 [2024-11-15 10:46:25.118961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.914 [2024-11-15 10:46:25.118986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.914 [2024-11-15 10:46:25.119001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.914 [2024-11-15 10:46:25.119013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.914 [2024-11-15 10:46:25.119043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.914 qpair failed and we were unable to recover it. 00:27:36.914 [2024-11-15 10:46:25.128883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.914 [2024-11-15 10:46:25.128990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.914 [2024-11-15 10:46:25.129015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.914 [2024-11-15 10:46:25.129030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.914 [2024-11-15 10:46:25.129042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.914 [2024-11-15 10:46:25.129072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.914 qpair failed and we were unable to recover it. 00:27:36.914 [2024-11-15 10:46:25.138918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.914 [2024-11-15 10:46:25.139055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.914 [2024-11-15 10:46:25.139080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.914 [2024-11-15 10:46:25.139095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.914 [2024-11-15 10:46:25.139107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.914 [2024-11-15 10:46:25.139138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.914 qpair failed and we were unable to recover it. 
00:27:36.914 [2024-11-15 10:46:25.148933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.914 [2024-11-15 10:46:25.149034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.914 [2024-11-15 10:46:25.149059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.914 [2024-11-15 10:46:25.149074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.914 [2024-11-15 10:46:25.149094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.914 [2024-11-15 10:46:25.149126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.914 qpair failed and we were unable to recover it. 00:27:36.914 [2024-11-15 10:46:25.158944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.914 [2024-11-15 10:46:25.159089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.914 [2024-11-15 10:46:25.159114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.914 [2024-11-15 10:46:25.159128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.914 [2024-11-15 10:46:25.159141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.914 [2024-11-15 10:46:25.159170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.914 qpair failed and we were unable to recover it. 00:27:36.914 [2024-11-15 10:46:25.168989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.914 [2024-11-15 10:46:25.169094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.914 [2024-11-15 10:46:25.169119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.914 [2024-11-15 10:46:25.169133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.914 [2024-11-15 10:46:25.169145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.914 [2024-11-15 10:46:25.169175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.914 qpair failed and we were unable to recover it. 
00:27:36.914 [2024-11-15 10:46:25.179030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.914 [2024-11-15 10:46:25.179170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.914 [2024-11-15 10:46:25.179196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.914 [2024-11-15 10:46:25.179211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.914 [2024-11-15 10:46:25.179222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.914 [2024-11-15 10:46:25.179252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.914 qpair failed and we were unable to recover it. 00:27:36.914 [2024-11-15 10:46:25.189006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.914 [2024-11-15 10:46:25.189131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.914 [2024-11-15 10:46:25.189156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.914 [2024-11-15 10:46:25.189171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.914 [2024-11-15 10:46:25.189183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.914 [2024-11-15 10:46:25.189213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.914 qpair failed and we were unable to recover it. 00:27:36.914 [2024-11-15 10:46:25.199066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.914 [2024-11-15 10:46:25.199202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.914 [2024-11-15 10:46:25.199227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.914 [2024-11-15 10:46:25.199242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.914 [2024-11-15 10:46:25.199254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.914 [2024-11-15 10:46:25.199284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.914 qpair failed and we were unable to recover it. 
00:27:36.914 [2024-11-15 10:46:25.209037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.914 [2024-11-15 10:46:25.209172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.914 [2024-11-15 10:46:25.209197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.914 [2024-11-15 10:46:25.209212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.914 [2024-11-15 10:46:25.209224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.914 [2024-11-15 10:46:25.209253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.914 qpair failed and we were unable to recover it. 00:27:36.914 [2024-11-15 10:46:25.219184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.914 [2024-11-15 10:46:25.219326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.914 [2024-11-15 10:46:25.219352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.914 [2024-11-15 10:46:25.219376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.914 [2024-11-15 10:46:25.219389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.914 [2024-11-15 10:46:25.219419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.914 qpair failed and we were unable to recover it. 00:27:36.914 [2024-11-15 10:46:25.229104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.914 [2024-11-15 10:46:25.229207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.914 [2024-11-15 10:46:25.229233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.914 [2024-11-15 10:46:25.229247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.915 [2024-11-15 10:46:25.229259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.915 [2024-11-15 10:46:25.229289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.915 qpair failed and we were unable to recover it. 
00:27:36.915 [2024-11-15 10:46:25.239153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.915 [2024-11-15 10:46:25.239312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.915 [2024-11-15 10:46:25.239343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.915 [2024-11-15 10:46:25.239358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.915 [2024-11-15 10:46:25.239381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.915 [2024-11-15 10:46:25.239411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.915 qpair failed and we were unable to recover it. 00:27:36.915 [2024-11-15 10:46:25.249172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.915 [2024-11-15 10:46:25.249270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.915 [2024-11-15 10:46:25.249293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.915 [2024-11-15 10:46:25.249307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.915 [2024-11-15 10:46:25.249319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.915 [2024-11-15 10:46:25.249348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.915 qpair failed and we were unable to recover it. 00:27:36.915 [2024-11-15 10:46:25.259190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.915 [2024-11-15 10:46:25.259286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.915 [2024-11-15 10:46:25.259309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.915 [2024-11-15 10:46:25.259323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.915 [2024-11-15 10:46:25.259336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.915 [2024-11-15 10:46:25.259373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.915 qpair failed and we were unable to recover it. 
00:27:36.915 [2024-11-15 10:46:25.269216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.915 [2024-11-15 10:46:25.269312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.915 [2024-11-15 10:46:25.269336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.915 [2024-11-15 10:46:25.269350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.915 [2024-11-15 10:46:25.269370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.915 [2024-11-15 10:46:25.269403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.915 qpair failed and we were unable to recover it. 00:27:36.915 [2024-11-15 10:46:25.279263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.915 [2024-11-15 10:46:25.279383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.915 [2024-11-15 10:46:25.279409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.915 [2024-11-15 10:46:25.279423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.915 [2024-11-15 10:46:25.279440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.915 [2024-11-15 10:46:25.279472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.915 qpair failed and we were unable to recover it. 00:27:36.915 [2024-11-15 10:46:25.289312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.915 [2024-11-15 10:46:25.289423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.915 [2024-11-15 10:46:25.289449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.915 [2024-11-15 10:46:25.289463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.915 [2024-11-15 10:46:25.289476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.915 [2024-11-15 10:46:25.289506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.915 qpair failed and we were unable to recover it. 
00:27:36.915 [2024-11-15 10:46:25.299331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.915 [2024-11-15 10:46:25.299446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.915 [2024-11-15 10:46:25.299472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.915 [2024-11-15 10:46:25.299487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.915 [2024-11-15 10:46:25.299499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.915 [2024-11-15 10:46:25.299529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.915 qpair failed and we were unable to recover it. 00:27:36.915 [2024-11-15 10:46:25.309328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.915 [2024-11-15 10:46:25.309439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.915 [2024-11-15 10:46:25.309468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.915 [2024-11-15 10:46:25.309482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.915 [2024-11-15 10:46:25.309493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.915 [2024-11-15 10:46:25.309524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.915 qpair failed and we were unable to recover it. 00:27:36.915 [2024-11-15 10:46:25.319430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.915 [2024-11-15 10:46:25.319524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.915 [2024-11-15 10:46:25.319549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.915 [2024-11-15 10:46:25.319564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.915 [2024-11-15 10:46:25.319575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.915 [2024-11-15 10:46:25.319605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.915 qpair failed and we were unable to recover it. 
00:27:36.915 [2024-11-15 10:46:25.329451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.915 [2024-11-15 10:46:25.329543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.915 [2024-11-15 10:46:25.329569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.915 [2024-11-15 10:46:25.329583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.915 [2024-11-15 10:46:25.329596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.915 [2024-11-15 10:46:25.329626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.915 qpair failed and we were unable to recover it. 00:27:36.915 [2024-11-15 10:46:25.339452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.915 [2024-11-15 10:46:25.339535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.915 [2024-11-15 10:46:25.339560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.915 [2024-11-15 10:46:25.339573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.915 [2024-11-15 10:46:25.339585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.915 [2024-11-15 10:46:25.339614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.915 qpair failed and we were unable to recover it. 00:27:36.916 [2024-11-15 10:46:25.349495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.916 [2024-11-15 10:46:25.349608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.916 [2024-11-15 10:46:25.349633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.916 [2024-11-15 10:46:25.349648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.916 [2024-11-15 10:46:25.349659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.916 [2024-11-15 10:46:25.349688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.916 qpair failed and we were unable to recover it. 
00:27:36.916 [2024-11-15 10:46:25.359531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.916 [2024-11-15 10:46:25.359655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.916 [2024-11-15 10:46:25.359680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.916 [2024-11-15 10:46:25.359694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.916 [2024-11-15 10:46:25.359706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.916 [2024-11-15 10:46:25.359735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.916 qpair failed and we were unable to recover it. 00:27:36.916 [2024-11-15 10:46:25.369534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.916 [2024-11-15 10:46:25.369627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.916 [2024-11-15 10:46:25.369651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.916 [2024-11-15 10:46:25.369664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.916 [2024-11-15 10:46:25.369676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.916 [2024-11-15 10:46:25.369705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.916 qpair failed and we were unable to recover it. 00:27:36.916 [2024-11-15 10:46:25.379641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.916 [2024-11-15 10:46:25.379730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.916 [2024-11-15 10:46:25.379757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.916 [2024-11-15 10:46:25.379771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.916 [2024-11-15 10:46:25.379782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:36.916 [2024-11-15 10:46:25.379812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.916 qpair failed and we were unable to recover it. 
00:27:37.175 [2024-11-15 10:46:25.389596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.175 [2024-11-15 10:46:25.389709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.175 [2024-11-15 10:46:25.389734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.175 [2024-11-15 10:46:25.389749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.175 [2024-11-15 10:46:25.389761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.175 [2024-11-15 10:46:25.389790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.175 qpair failed and we were unable to recover it. 00:27:37.175 [2024-11-15 10:46:25.399620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.175 [2024-11-15 10:46:25.399711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.175 [2024-11-15 10:46:25.399740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.175 [2024-11-15 10:46:25.399754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.175 [2024-11-15 10:46:25.399765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.175 [2024-11-15 10:46:25.399795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.175 qpair failed and we were unable to recover it. 00:27:37.175 [2024-11-15 10:46:25.409642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.175 [2024-11-15 10:46:25.409758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.175 [2024-11-15 10:46:25.409782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.175 [2024-11-15 10:46:25.409802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.175 [2024-11-15 10:46:25.409815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.175 [2024-11-15 10:46:25.409844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.175 qpair failed and we were unable to recover it. 
00:27:37.175 [2024-11-15 10:46:25.419683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.175 [2024-11-15 10:46:25.419780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.175 [2024-11-15 10:46:25.419809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.175 [2024-11-15 10:46:25.419824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.175 [2024-11-15 10:46:25.419836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.175 [2024-11-15 10:46:25.419866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.175 qpair failed and we were unable to recover it. 00:27:37.175 [2024-11-15 10:46:25.429707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.175 [2024-11-15 10:46:25.429805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.175 [2024-11-15 10:46:25.429829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.176 [2024-11-15 10:46:25.429842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.176 [2024-11-15 10:46:25.429855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.176 [2024-11-15 10:46:25.429885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.176 qpair failed and we were unable to recover it. 00:27:37.176 [2024-11-15 10:46:25.439826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.176 [2024-11-15 10:46:25.439915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.176 [2024-11-15 10:46:25.439940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.176 [2024-11-15 10:46:25.439954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.176 [2024-11-15 10:46:25.439966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.176 [2024-11-15 10:46:25.439995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.176 qpair failed and we were unable to recover it. 
00:27:37.176 [2024-11-15 10:46:25.449826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.176 [2024-11-15 10:46:25.449930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.176 [2024-11-15 10:46:25.449955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.176 [2024-11-15 10:46:25.449969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.176 [2024-11-15 10:46:25.449980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.176 [2024-11-15 10:46:25.450015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.176 qpair failed and we were unable to recover it. 00:27:37.176 [2024-11-15 10:46:25.459768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.176 [2024-11-15 10:46:25.459870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.176 [2024-11-15 10:46:25.459896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.176 [2024-11-15 10:46:25.459910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.176 [2024-11-15 10:46:25.459922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.176 [2024-11-15 10:46:25.459952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.176 qpair failed and we were unable to recover it. 00:27:37.176 [2024-11-15 10:46:25.469785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.176 [2024-11-15 10:46:25.469889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.176 [2024-11-15 10:46:25.469915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.176 [2024-11-15 10:46:25.469930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.176 [2024-11-15 10:46:25.469942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.176 [2024-11-15 10:46:25.469973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.176 qpair failed and we were unable to recover it. 
00:27:37.176 [2024-11-15 10:46:25.479841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.176 [2024-11-15 10:46:25.479945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.176 [2024-11-15 10:46:25.479973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.176 [2024-11-15 10:46:25.479987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.176 [2024-11-15 10:46:25.479999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.176 [2024-11-15 10:46:25.480030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.176 qpair failed and we were unable to recover it. 00:27:37.176 [2024-11-15 10:46:25.489884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.176 [2024-11-15 10:46:25.490012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.176 [2024-11-15 10:46:25.490038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.176 [2024-11-15 10:46:25.490053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.176 [2024-11-15 10:46:25.490065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.176 [2024-11-15 10:46:25.490095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.176 qpair failed and we were unable to recover it. 00:27:37.176 [2024-11-15 10:46:25.499893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.176 [2024-11-15 10:46:25.500001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.176 [2024-11-15 10:46:25.500027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.176 [2024-11-15 10:46:25.500041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.176 [2024-11-15 10:46:25.500054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.176 [2024-11-15 10:46:25.500084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.176 qpair failed and we were unable to recover it. 
00:27:37.176 [2024-11-15 10:46:25.509935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.176 [2024-11-15 10:46:25.510039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.176 [2024-11-15 10:46:25.510064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.176 [2024-11-15 10:46:25.510078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.176 [2024-11-15 10:46:25.510090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.176 [2024-11-15 10:46:25.510120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.176 qpair failed and we were unable to recover it. 00:27:37.176 [2024-11-15 10:46:25.519972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.176 [2024-11-15 10:46:25.520079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.176 [2024-11-15 10:46:25.520105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.176 [2024-11-15 10:46:25.520119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.176 [2024-11-15 10:46:25.520130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.176 [2024-11-15 10:46:25.520160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.176 qpair failed and we were unable to recover it. 00:27:37.176 [2024-11-15 10:46:25.530031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.176 [2024-11-15 10:46:25.530133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.176 [2024-11-15 10:46:25.530159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.176 [2024-11-15 10:46:25.530174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.176 [2024-11-15 10:46:25.530186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.177 [2024-11-15 10:46:25.530216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.177 qpair failed and we were unable to recover it. 
00:27:37.177 [2024-11-15 10:46:25.540036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.177 [2024-11-15 10:46:25.540149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.177 [2024-11-15 10:46:25.540179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.177 [2024-11-15 10:46:25.540193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.177 [2024-11-15 10:46:25.540205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.177 [2024-11-15 10:46:25.540234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.177 qpair failed and we were unable to recover it. 00:27:37.177 [2024-11-15 10:46:25.550019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.177 [2024-11-15 10:46:25.550120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.177 [2024-11-15 10:46:25.550145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.177 [2024-11-15 10:46:25.550160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.177 [2024-11-15 10:46:25.550171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.177 [2024-11-15 10:46:25.550201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.177 qpair failed and we were unable to recover it. 00:27:37.177 [2024-11-15 10:46:25.560152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.177 [2024-11-15 10:46:25.560262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.177 [2024-11-15 10:46:25.560287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.177 [2024-11-15 10:46:25.560301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.177 [2024-11-15 10:46:25.560312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.177 [2024-11-15 10:46:25.560343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.177 qpair failed and we were unable to recover it. 
00:27:37.177 [2024-11-15 10:46:25.570123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.177 [2024-11-15 10:46:25.570228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.177 [2024-11-15 10:46:25.570252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.177 [2024-11-15 10:46:25.570266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.177 [2024-11-15 10:46:25.570277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.177 [2024-11-15 10:46:25.570308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.177 qpair failed and we were unable to recover it. 00:27:37.177 [2024-11-15 10:46:25.580108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.177 [2024-11-15 10:46:25.580206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.177 [2024-11-15 10:46:25.580230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.177 [2024-11-15 10:46:25.580244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.177 [2024-11-15 10:46:25.580256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.177 [2024-11-15 10:46:25.580291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.177 qpair failed and we were unable to recover it. 00:27:37.177 [2024-11-15 10:46:25.590128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.177 [2024-11-15 10:46:25.590233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.177 [2024-11-15 10:46:25.590258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.177 [2024-11-15 10:46:25.590272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.177 [2024-11-15 10:46:25.590284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.177 [2024-11-15 10:46:25.590314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.177 qpair failed and we were unable to recover it. 
00:27:37.177 [2024-11-15 10:46:25.600228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.177 [2024-11-15 10:46:25.600333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.177 [2024-11-15 10:46:25.600359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.177 [2024-11-15 10:46:25.600387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.177 [2024-11-15 10:46:25.600400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.177 [2024-11-15 10:46:25.600431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.177 qpair failed and we were unable to recover it. 00:27:37.177 [2024-11-15 10:46:25.610286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.177 [2024-11-15 10:46:25.610427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.177 [2024-11-15 10:46:25.610454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.177 [2024-11-15 10:46:25.610468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.177 [2024-11-15 10:46:25.610480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.177 [2024-11-15 10:46:25.610510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.177 qpair failed and we were unable to recover it. 00:27:37.177 [2024-11-15 10:46:25.620280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.177 [2024-11-15 10:46:25.620397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.177 [2024-11-15 10:46:25.620423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.177 [2024-11-15 10:46:25.620437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.177 [2024-11-15 10:46:25.620449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.177 [2024-11-15 10:46:25.620479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.177 qpair failed and we were unable to recover it. 
00:27:37.177 [2024-11-15 10:46:25.630279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.177 [2024-11-15 10:46:25.630402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.177 [2024-11-15 10:46:25.630429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.177 [2024-11-15 10:46:25.630443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.177 [2024-11-15 10:46:25.630456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.177 [2024-11-15 10:46:25.630485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.177 qpair failed and we were unable to recover it. 00:27:37.177 [2024-11-15 10:46:25.640422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.178 [2024-11-15 10:46:25.640516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.178 [2024-11-15 10:46:25.640543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.178 [2024-11-15 10:46:25.640558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.178 [2024-11-15 10:46:25.640570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.178 [2024-11-15 10:46:25.640601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.178 qpair failed and we were unable to recover it. 00:27:37.436 [2024-11-15 10:46:25.650385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.436 [2024-11-15 10:46:25.650482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.436 [2024-11-15 10:46:25.650508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.436 [2024-11-15 10:46:25.650523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.436 [2024-11-15 10:46:25.650535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.436 [2024-11-15 10:46:25.650565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.436 qpair failed and we were unable to recover it. 
00:27:37.436 [2024-11-15 10:46:25.660415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.436 [2024-11-15 10:46:25.660511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.436 [2024-11-15 10:46:25.660536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.436 [2024-11-15 10:46:25.660550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.436 [2024-11-15 10:46:25.660562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.436 [2024-11-15 10:46:25.660593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.436 qpair failed and we were unable to recover it. 00:27:37.436 [2024-11-15 10:46:25.670432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.436 [2024-11-15 10:46:25.670565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.436 [2024-11-15 10:46:25.670596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.436 [2024-11-15 10:46:25.670611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.436 [2024-11-15 10:46:25.670623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.436 [2024-11-15 10:46:25.670656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.436 qpair failed and we were unable to recover it. 00:27:37.436 [2024-11-15 10:46:25.680467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.436 [2024-11-15 10:46:25.680592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.436 [2024-11-15 10:46:25.680618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.436 [2024-11-15 10:46:25.680632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.436 [2024-11-15 10:46:25.680644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.436 [2024-11-15 10:46:25.680684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.436 qpair failed and we were unable to recover it. 
00:27:37.436 [2024-11-15 10:46:25.690544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.436 [2024-11-15 10:46:25.690643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.436 [2024-11-15 10:46:25.690668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.436 [2024-11-15 10:46:25.690682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.436 [2024-11-15 10:46:25.690694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.436 [2024-11-15 10:46:25.690724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.436 qpair failed and we were unable to recover it. 00:27:37.436 [2024-11-15 10:46:25.700503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.437 [2024-11-15 10:46:25.700632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.437 [2024-11-15 10:46:25.700658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.437 [2024-11-15 10:46:25.700672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.437 [2024-11-15 10:46:25.700684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.437 [2024-11-15 10:46:25.700724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.437 qpair failed and we were unable to recover it. 00:27:37.437 [2024-11-15 10:46:25.710505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.437 [2024-11-15 10:46:25.710589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.437 [2024-11-15 10:46:25.710613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.437 [2024-11-15 10:46:25.710627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.437 [2024-11-15 10:46:25.710644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.437 [2024-11-15 10:46:25.710674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.437 qpair failed and we were unable to recover it. 
00:27:37.437 [2024-11-15 10:46:25.720643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.437 [2024-11-15 10:46:25.720773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.437 [2024-11-15 10:46:25.720797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.437 [2024-11-15 10:46:25.720810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.437 [2024-11-15 10:46:25.720822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.437 [2024-11-15 10:46:25.720851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.437 qpair failed and we were unable to recover it. 00:27:37.437 [2024-11-15 10:46:25.730600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.437 [2024-11-15 10:46:25.730690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.437 [2024-11-15 10:46:25.730716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.437 [2024-11-15 10:46:25.730730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.437 [2024-11-15 10:46:25.730741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.437 [2024-11-15 10:46:25.730771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.437 qpair failed and we were unable to recover it. 00:27:37.437 [2024-11-15 10:46:25.740640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.437 [2024-11-15 10:46:25.740755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.437 [2024-11-15 10:46:25.740779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.437 [2024-11-15 10:46:25.740792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.437 [2024-11-15 10:46:25.740805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.437 [2024-11-15 10:46:25.740835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.437 qpair failed and we were unable to recover it. 
00:27:37.437 [2024-11-15 10:46:25.750681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.437 [2024-11-15 10:46:25.750802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.437 [2024-11-15 10:46:25.750828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.437 [2024-11-15 10:46:25.750842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.437 [2024-11-15 10:46:25.750854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.437 [2024-11-15 10:46:25.750893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.437 qpair failed and we were unable to recover it. 00:27:37.437 [2024-11-15 10:46:25.760681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.437 [2024-11-15 10:46:25.760835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.437 [2024-11-15 10:46:25.760860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.437 [2024-11-15 10:46:25.760875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.437 [2024-11-15 10:46:25.760887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.437 [2024-11-15 10:46:25.760917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.437 qpair failed and we were unable to recover it. 00:27:37.437 [2024-11-15 10:46:25.770747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.437 [2024-11-15 10:46:25.770845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.437 [2024-11-15 10:46:25.770869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.437 [2024-11-15 10:46:25.770882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.437 [2024-11-15 10:46:25.770894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.437 [2024-11-15 10:46:25.770924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.437 qpair failed and we were unable to recover it. 
00:27:37.437 [2024-11-15 10:46:25.780781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.437 [2024-11-15 10:46:25.780881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.437 [2024-11-15 10:46:25.780906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.437 [2024-11-15 10:46:25.780920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.437 [2024-11-15 10:46:25.780932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.437 [2024-11-15 10:46:25.780962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.437 qpair failed and we were unable to recover it. 00:27:37.437 [2024-11-15 10:46:25.790778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.437 [2024-11-15 10:46:25.790908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.437 [2024-11-15 10:46:25.790934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.437 [2024-11-15 10:46:25.790947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.437 [2024-11-15 10:46:25.790959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.437 [2024-11-15 10:46:25.790989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.437 qpair failed and we were unable to recover it. 00:27:37.437 [2024-11-15 10:46:25.800816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.437 [2024-11-15 10:46:25.800922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.437 [2024-11-15 10:46:25.800953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.437 [2024-11-15 10:46:25.800968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.437 [2024-11-15 10:46:25.800980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.437 [2024-11-15 10:46:25.801009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.437 qpair failed and we were unable to recover it. 
00:27:37.437 [2024-11-15 10:46:25.810930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.437 [2024-11-15 10:46:25.811065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.437 [2024-11-15 10:46:25.811091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.437 [2024-11-15 10:46:25.811105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.437 [2024-11-15 10:46:25.811117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.437 [2024-11-15 10:46:25.811147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.437 qpair failed and we were unable to recover it. 00:27:37.437 [2024-11-15 10:46:25.820841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.437 [2024-11-15 10:46:25.820979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.437 [2024-11-15 10:46:25.821004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.437 [2024-11-15 10:46:25.821019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.437 [2024-11-15 10:46:25.821031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.437 [2024-11-15 10:46:25.821061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.438 qpair failed and we were unable to recover it. 00:27:37.438 [2024-11-15 10:46:25.830905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.438 [2024-11-15 10:46:25.830999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.438 [2024-11-15 10:46:25.831028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.438 [2024-11-15 10:46:25.831043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.438 [2024-11-15 10:46:25.831055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.438 [2024-11-15 10:46:25.831085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.438 qpair failed and we were unable to recover it. 
00:27:37.438 [2024-11-15 10:46:25.840952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.438 [2024-11-15 10:46:25.841087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.438 [2024-11-15 10:46:25.841112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.438 [2024-11-15 10:46:25.841133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.438 [2024-11-15 10:46:25.841156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.438 [2024-11-15 10:46:25.841187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.438 qpair failed and we were unable to recover it. 00:27:37.438 [2024-11-15 10:46:25.850965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.438 [2024-11-15 10:46:25.851113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.438 [2024-11-15 10:46:25.851138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.438 [2024-11-15 10:46:25.851152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.438 [2024-11-15 10:46:25.851163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.438 [2024-11-15 10:46:25.851194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.438 qpair failed and we were unable to recover it. 00:27:37.438 [2024-11-15 10:46:25.860945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.438 [2024-11-15 10:46:25.861029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.438 [2024-11-15 10:46:25.861053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.438 [2024-11-15 10:46:25.861067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.438 [2024-11-15 10:46:25.861079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.438 [2024-11-15 10:46:25.861108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.438 qpair failed and we were unable to recover it. 
00:27:37.438 [2024-11-15 10:46:25.871006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.438 [2024-11-15 10:46:25.871100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.438 [2024-11-15 10:46:25.871123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.438 [2024-11-15 10:46:25.871137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.438 [2024-11-15 10:46:25.871149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.438 [2024-11-15 10:46:25.871179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.438 qpair failed and we were unable to recover it. 00:27:37.438 [2024-11-15 10:46:25.881051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.438 [2024-11-15 10:46:25.881183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.438 [2024-11-15 10:46:25.881208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.438 [2024-11-15 10:46:25.881222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.438 [2024-11-15 10:46:25.881234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.438 [2024-11-15 10:46:25.881264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.438 qpair failed and we were unable to recover it. 00:27:37.438 [2024-11-15 10:46:25.891026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.438 [2024-11-15 10:46:25.891132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.438 [2024-11-15 10:46:25.891158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.438 [2024-11-15 10:46:25.891173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.438 [2024-11-15 10:46:25.891184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.438 [2024-11-15 10:46:25.891214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.438 qpair failed and we were unable to recover it. 
00:27:37.438 [2024-11-15 10:46:25.901104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.438 [2024-11-15 10:46:25.901237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.438 [2024-11-15 10:46:25.901262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.438 [2024-11-15 10:46:25.901276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.438 [2024-11-15 10:46:25.901287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.438 [2024-11-15 10:46:25.901317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.438 qpair failed and we were unable to recover it. 00:27:37.696 [2024-11-15 10:46:25.911161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.696 [2024-11-15 10:46:25.911268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.696 [2024-11-15 10:46:25.911293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.696 [2024-11-15 10:46:25.911308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.696 [2024-11-15 10:46:25.911320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.696 [2024-11-15 10:46:25.911350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.696 qpair failed and we were unable to recover it. 00:27:37.696 [2024-11-15 10:46:25.921149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.696 [2024-11-15 10:46:25.921302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.696 [2024-11-15 10:46:25.921327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.696 [2024-11-15 10:46:25.921342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.696 [2024-11-15 10:46:25.921354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.696 [2024-11-15 10:46:25.921392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.696 qpair failed and we were unable to recover it. 
00:27:37.696 [2024-11-15 10:46:25.931162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.697 [2024-11-15 10:46:25.931268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.697 [2024-11-15 10:46:25.931293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.697 [2024-11-15 10:46:25.931307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.697 [2024-11-15 10:46:25.931319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.697 [2024-11-15 10:46:25.931349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.697 qpair failed and we were unable to recover it. 00:27:37.697 [2024-11-15 10:46:25.941181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.697 [2024-11-15 10:46:25.941311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.697 [2024-11-15 10:46:25.941336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.697 [2024-11-15 10:46:25.941350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.697 [2024-11-15 10:46:25.941370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.697 [2024-11-15 10:46:25.941402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.697 qpair failed and we were unable to recover it. 00:27:37.697 [2024-11-15 10:46:25.951214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.697 [2024-11-15 10:46:25.951313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.697 [2024-11-15 10:46:25.951338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.697 [2024-11-15 10:46:25.951352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.697 [2024-11-15 10:46:25.951374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.697 [2024-11-15 10:46:25.951407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.697 qpair failed and we were unable to recover it. 
00:27:37.697 [2024-11-15 10:46:25.961252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.697 [2024-11-15 10:46:25.961359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.697 [2024-11-15 10:46:25.961392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.697 [2024-11-15 10:46:25.961406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.697 [2024-11-15 10:46:25.961418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.697 [2024-11-15 10:46:25.961448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.697 qpair failed and we were unable to recover it. 00:27:37.697 [2024-11-15 10:46:25.971223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.697 [2024-11-15 10:46:25.971335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.697 [2024-11-15 10:46:25.971360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.697 [2024-11-15 10:46:25.971389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.697 [2024-11-15 10:46:25.971402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.697 [2024-11-15 10:46:25.971433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.697 qpair failed and we were unable to recover it. 00:27:37.697 [2024-11-15 10:46:25.981299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.697 [2024-11-15 10:46:25.981425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.697 [2024-11-15 10:46:25.981451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.697 [2024-11-15 10:46:25.981466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.697 [2024-11-15 10:46:25.981477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.697 [2024-11-15 10:46:25.981508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.697 qpair failed and we were unable to recover it. 
00:27:37.697 [2024-11-15 10:46:25.991309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.697 [2024-11-15 10:46:25.991459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.697 [2024-11-15 10:46:25.991485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.697 [2024-11-15 10:46:25.991499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.697 [2024-11-15 10:46:25.991511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.697 [2024-11-15 10:46:25.991541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.697 qpair failed and we were unable to recover it. 00:27:37.697 [2024-11-15 10:46:26.001394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.697 [2024-11-15 10:46:26.001499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.697 [2024-11-15 10:46:26.001524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.697 [2024-11-15 10:46:26.001539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.697 [2024-11-15 10:46:26.001550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.697 [2024-11-15 10:46:26.001580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.697 qpair failed and we were unable to recover it. 00:27:37.697 [2024-11-15 10:46:26.011409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.697 [2024-11-15 10:46:26.011509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.697 [2024-11-15 10:46:26.011534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.697 [2024-11-15 10:46:26.011548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.697 [2024-11-15 10:46:26.011560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.697 [2024-11-15 10:46:26.011596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.697 qpair failed and we were unable to recover it. 
00:27:37.697 [2024-11-15 10:46:26.021429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.697 [2024-11-15 10:46:26.021524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.697 [2024-11-15 10:46:26.021549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.697 [2024-11-15 10:46:26.021564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.697 [2024-11-15 10:46:26.021576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.697 [2024-11-15 10:46:26.021606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.697 qpair failed and we were unable to recover it. 00:27:37.697 [2024-11-15 10:46:26.031442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.697 [2024-11-15 10:46:26.031562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.697 [2024-11-15 10:46:26.031587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.697 [2024-11-15 10:46:26.031601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.697 [2024-11-15 10:46:26.031613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.697 [2024-11-15 10:46:26.031644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.697 qpair failed and we were unable to recover it. 00:27:37.697 [2024-11-15 10:46:26.041502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.697 [2024-11-15 10:46:26.041594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.697 [2024-11-15 10:46:26.041619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.697 [2024-11-15 10:46:26.041633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.697 [2024-11-15 10:46:26.041645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.697 [2024-11-15 10:46:26.041675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.697 qpair failed and we were unable to recover it. 
00:27:37.697 [2024-11-15 10:46:26.051511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.697 [2024-11-15 10:46:26.051635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.697 [2024-11-15 10:46:26.051660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.697 [2024-11-15 10:46:26.051674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.697 [2024-11-15 10:46:26.051686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.697 [2024-11-15 10:46:26.051715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.697 qpair failed and we were unable to recover it. 00:27:37.697 [2024-11-15 10:46:26.061546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.698 [2024-11-15 10:46:26.061636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.698 [2024-11-15 10:46:26.061660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.698 [2024-11-15 10:46:26.061674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.698 [2024-11-15 10:46:26.061685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.698 [2024-11-15 10:46:26.061716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.698 qpair failed and we were unable to recover it. 00:27:37.698 [2024-11-15 10:46:26.071574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.698 [2024-11-15 10:46:26.071660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.698 [2024-11-15 10:46:26.071684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.698 [2024-11-15 10:46:26.071698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.698 [2024-11-15 10:46:26.071709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.698 [2024-11-15 10:46:26.071740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.698 qpair failed and we were unable to recover it. 
00:27:37.698 [2024-11-15 10:46:26.081631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.698 [2024-11-15 10:46:26.081723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.698 [2024-11-15 10:46:26.081746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.698 [2024-11-15 10:46:26.081760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.698 [2024-11-15 10:46:26.081771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.698 [2024-11-15 10:46:26.081801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.698 qpair failed and we were unable to recover it. 00:27:37.698 [2024-11-15 10:46:26.091648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.698 [2024-11-15 10:46:26.091789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.698 [2024-11-15 10:46:26.091815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.698 [2024-11-15 10:46:26.091829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.698 [2024-11-15 10:46:26.091841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.698 [2024-11-15 10:46:26.091882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.698 qpair failed and we were unable to recover it. 00:27:37.698 [2024-11-15 10:46:26.101685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.698 [2024-11-15 10:46:26.101829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.698 [2024-11-15 10:46:26.101860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.698 [2024-11-15 10:46:26.101876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.698 [2024-11-15 10:46:26.101888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.698 [2024-11-15 10:46:26.101918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.698 qpair failed and we were unable to recover it. 
00:27:37.698 [2024-11-15 10:46:26.111744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.698 [2024-11-15 10:46:26.111849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.698 [2024-11-15 10:46:26.111875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.698 [2024-11-15 10:46:26.111889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.698 [2024-11-15 10:46:26.111900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.698 [2024-11-15 10:46:26.111930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.698 qpair failed and we were unable to recover it. 00:27:37.698 [2024-11-15 10:46:26.121755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.698 [2024-11-15 10:46:26.121862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.698 [2024-11-15 10:46:26.121887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.698 [2024-11-15 10:46:26.121901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.698 [2024-11-15 10:46:26.121912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.698 [2024-11-15 10:46:26.121942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.698 qpair failed and we were unable to recover it. 00:27:37.698 [2024-11-15 10:46:26.131812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.698 [2024-11-15 10:46:26.131926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.698 [2024-11-15 10:46:26.131952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.698 [2024-11-15 10:46:26.131967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.698 [2024-11-15 10:46:26.131978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.698 [2024-11-15 10:46:26.132009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.698 qpair failed and we were unable to recover it. 
00:27:37.698 [2024-11-15 10:46:26.141769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.698 [2024-11-15 10:46:26.141913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.698 [2024-11-15 10:46:26.141939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.698 [2024-11-15 10:46:26.141953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.698 [2024-11-15 10:46:26.141965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.698 [2024-11-15 10:46:26.142001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.698 qpair failed and we were unable to recover it. 00:27:37.698 [2024-11-15 10:46:26.151880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.698 [2024-11-15 10:46:26.151981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.698 [2024-11-15 10:46:26.152007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.698 [2024-11-15 10:46:26.152021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.698 [2024-11-15 10:46:26.152034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.698 [2024-11-15 10:46:26.152064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.698 qpair failed and we were unable to recover it. 00:27:37.698 [2024-11-15 10:46:26.161852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.698 [2024-11-15 10:46:26.161981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.698 [2024-11-15 10:46:26.162006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.698 [2024-11-15 10:46:26.162021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.698 [2024-11-15 10:46:26.162032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.698 [2024-11-15 10:46:26.162062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.698 qpair failed and we were unable to recover it. 
00:27:37.958 [2024-11-15 10:46:26.171836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.958 [2024-11-15 10:46:26.171967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.958 [2024-11-15 10:46:26.171993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.958 [2024-11-15 10:46:26.172007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.959 [2024-11-15 10:46:26.172019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.959 [2024-11-15 10:46:26.172049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.959 qpair failed and we were unable to recover it. 00:27:37.959 [2024-11-15 10:46:26.181882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.959 [2024-11-15 10:46:26.181986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.959 [2024-11-15 10:46:26.182011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.959 [2024-11-15 10:46:26.182025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.959 [2024-11-15 10:46:26.182038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.959 [2024-11-15 10:46:26.182068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.959 qpair failed and we were unable to recover it. 00:27:37.959 [2024-11-15 10:46:26.191925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.959 [2024-11-15 10:46:26.192029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.959 [2024-11-15 10:46:26.192055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.959 [2024-11-15 10:46:26.192069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.959 [2024-11-15 10:46:26.192081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.959 [2024-11-15 10:46:26.192111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.959 qpair failed and we were unable to recover it. 
00:27:37.959 [2024-11-15 10:46:26.201998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.959 [2024-11-15 10:46:26.202123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.959 [2024-11-15 10:46:26.202148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.959 [2024-11-15 10:46:26.202163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.959 [2024-11-15 10:46:26.202175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.959 [2024-11-15 10:46:26.202205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.959 qpair failed and we were unable to recover it. 00:27:37.959 [2024-11-15 10:46:26.211990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.959 [2024-11-15 10:46:26.212087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.959 [2024-11-15 10:46:26.212112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.959 [2024-11-15 10:46:26.212126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.959 [2024-11-15 10:46:26.212138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.959 [2024-11-15 10:46:26.212168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.959 qpair failed and we were unable to recover it. 00:27:37.959 [2024-11-15 10:46:26.222006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.959 [2024-11-15 10:46:26.222136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.959 [2024-11-15 10:46:26.222162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.959 [2024-11-15 10:46:26.222176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.959 [2024-11-15 10:46:26.222188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.959 [2024-11-15 10:46:26.222219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.959 qpair failed and we were unable to recover it. 
00:27:37.959 [2024-11-15 10:46:26.232030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.959 [2024-11-15 10:46:26.232129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.959 [2024-11-15 10:46:26.232160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.959 [2024-11-15 10:46:26.232175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.959 [2024-11-15 10:46:26.232186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.959 [2024-11-15 10:46:26.232216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.959 qpair failed and we were unable to recover it. 00:27:37.959 [2024-11-15 10:46:26.242111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.959 [2024-11-15 10:46:26.242223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.959 [2024-11-15 10:46:26.242248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.959 [2024-11-15 10:46:26.242263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.959 [2024-11-15 10:46:26.242275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.959 [2024-11-15 10:46:26.242304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.959 qpair failed and we were unable to recover it. 00:27:37.959 [2024-11-15 10:46:26.252224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.959 [2024-11-15 10:46:26.252345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.959 [2024-11-15 10:46:26.252378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.959 [2024-11-15 10:46:26.252393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.959 [2024-11-15 10:46:26.252406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.959 [2024-11-15 10:46:26.252436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.959 qpair failed and we were unable to recover it. 
00:27:37.959 [2024-11-15 10:46:26.262143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.959 [2024-11-15 10:46:26.262249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.959 [2024-11-15 10:46:26.262273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.959 [2024-11-15 10:46:26.262286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.959 [2024-11-15 10:46:26.262298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.959 [2024-11-15 10:46:26.262328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.959 qpair failed and we were unable to recover it. 00:27:37.959 [2024-11-15 10:46:26.272181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.959 [2024-11-15 10:46:26.272279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.959 [2024-11-15 10:46:26.272307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.959 [2024-11-15 10:46:26.272322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.959 [2024-11-15 10:46:26.272339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.959 [2024-11-15 10:46:26.272378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.959 qpair failed and we were unable to recover it. 00:27:37.959 [2024-11-15 10:46:26.282205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.959 [2024-11-15 10:46:26.282327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.959 [2024-11-15 10:46:26.282352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.959 [2024-11-15 10:46:26.282375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.959 [2024-11-15 10:46:26.282389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.959 [2024-11-15 10:46:26.282420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.959 qpair failed and we were unable to recover it. 
00:27:37.959 [2024-11-15 10:46:26.292188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.959 [2024-11-15 10:46:26.292329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.959 [2024-11-15 10:46:26.292355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.959 [2024-11-15 10:46:26.292377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.959 [2024-11-15 10:46:26.292390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.959 [2024-11-15 10:46:26.292421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.959 qpair failed and we were unable to recover it. 00:27:37.959 [2024-11-15 10:46:26.302205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.959 [2024-11-15 10:46:26.302306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.959 [2024-11-15 10:46:26.302332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.959 [2024-11-15 10:46:26.302346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.960 [2024-11-15 10:46:26.302358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.960 [2024-11-15 10:46:26.302398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.960 qpair failed and we were unable to recover it. 00:27:37.960 [2024-11-15 10:46:26.312288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.960 [2024-11-15 10:46:26.312409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.960 [2024-11-15 10:46:26.312435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.960 [2024-11-15 10:46:26.312449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.960 [2024-11-15 10:46:26.312461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.960 [2024-11-15 10:46:26.312493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.960 qpair failed and we were unable to recover it. 
00:27:37.960 [2024-11-15 10:46:26.322260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.960 [2024-11-15 10:46:26.322379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.960 [2024-11-15 10:46:26.322404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.960 [2024-11-15 10:46:26.322418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.960 [2024-11-15 10:46:26.322430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.960 [2024-11-15 10:46:26.322460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.960 qpair failed and we were unable to recover it. 00:27:37.960 [2024-11-15 10:46:26.332307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.960 [2024-11-15 10:46:26.332435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.960 [2024-11-15 10:46:26.332461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.960 [2024-11-15 10:46:26.332475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.960 [2024-11-15 10:46:26.332488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.960 [2024-11-15 10:46:26.332518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.960 qpair failed and we were unable to recover it. 00:27:37.960 [2024-11-15 10:46:26.342302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.960 [2024-11-15 10:46:26.342408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.960 [2024-11-15 10:46:26.342433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.960 [2024-11-15 10:46:26.342447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.960 [2024-11-15 10:46:26.342459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.960 [2024-11-15 10:46:26.342489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.960 qpair failed and we were unable to recover it. 
00:27:37.960 [2024-11-15 10:46:26.352381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.960 [2024-11-15 10:46:26.352474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.960 [2024-11-15 10:46:26.352500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.960 [2024-11-15 10:46:26.352515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.960 [2024-11-15 10:46:26.352526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.960 [2024-11-15 10:46:26.352557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.960 qpair failed and we were unable to recover it. 00:27:37.960 [2024-11-15 10:46:26.362442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.960 [2024-11-15 10:46:26.362539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.960 [2024-11-15 10:46:26.362570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.960 [2024-11-15 10:46:26.362585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.960 [2024-11-15 10:46:26.362597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.960 [2024-11-15 10:46:26.362628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.960 qpair failed and we were unable to recover it. 00:27:37.960 [2024-11-15 10:46:26.372463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.960 [2024-11-15 10:46:26.372555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.960 [2024-11-15 10:46:26.372581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.960 [2024-11-15 10:46:26.372595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.960 [2024-11-15 10:46:26.372607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.960 [2024-11-15 10:46:26.372638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.960 qpair failed and we were unable to recover it. 
00:27:37.960 [2024-11-15 10:46:26.382481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.960 [2024-11-15 10:46:26.382565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.960 [2024-11-15 10:46:26.382589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.960 [2024-11-15 10:46:26.382603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.960 [2024-11-15 10:46:26.382614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.960 [2024-11-15 10:46:26.382645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.960 qpair failed and we were unable to recover it. 00:27:37.960 [2024-11-15 10:46:26.392471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.960 [2024-11-15 10:46:26.392601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.960 [2024-11-15 10:46:26.392627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.960 [2024-11-15 10:46:26.392641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.960 [2024-11-15 10:46:26.392653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.960 [2024-11-15 10:46:26.392683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.960 qpair failed and we were unable to recover it. 00:27:37.960 [2024-11-15 10:46:26.402507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.960 [2024-11-15 10:46:26.402612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.960 [2024-11-15 10:46:26.402638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.960 [2024-11-15 10:46:26.402657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.960 [2024-11-15 10:46:26.402670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.960 [2024-11-15 10:46:26.402700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.960 qpair failed and we were unable to recover it. 
00:27:37.960 [2024-11-15 10:46:26.412588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.960 [2024-11-15 10:46:26.412713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.960 [2024-11-15 10:46:26.412738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.960 [2024-11-15 10:46:26.412752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.960 [2024-11-15 10:46:26.412764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.960 [2024-11-15 10:46:26.412795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.960 qpair failed and we were unable to recover it. 00:27:37.960 [2024-11-15 10:46:26.422545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.960 [2024-11-15 10:46:26.422630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.960 [2024-11-15 10:46:26.422654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.960 [2024-11-15 10:46:26.422667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.960 [2024-11-15 10:46:26.422679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:37.960 [2024-11-15 10:46:26.422709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.960 qpair failed and we were unable to recover it. 00:27:38.220 [2024-11-15 10:46:26.432645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.220 [2024-11-15 10:46:26.432759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.220 [2024-11-15 10:46:26.432783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.220 [2024-11-15 10:46:26.432797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.220 [2024-11-15 10:46:26.432808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.220 [2024-11-15 10:46:26.432848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.220 qpair failed and we were unable to recover it. 
00:27:38.220 [2024-11-15 10:46:26.442640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.220 [2024-11-15 10:46:26.442763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.220 [2024-11-15 10:46:26.442787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.220 [2024-11-15 10:46:26.442801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.220 [2024-11-15 10:46:26.442813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.220 [2024-11-15 10:46:26.442843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.220 qpair failed and we were unable to recover it. 00:27:38.220 [2024-11-15 10:46:26.452675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.220 [2024-11-15 10:46:26.452791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.220 [2024-11-15 10:46:26.452816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.220 [2024-11-15 10:46:26.452830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.220 [2024-11-15 10:46:26.452842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.220 [2024-11-15 10:46:26.452872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.220 qpair failed and we were unable to recover it. 00:27:38.220 [2024-11-15 10:46:26.462699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.220 [2024-11-15 10:46:26.462798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.220 [2024-11-15 10:46:26.462823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.220 [2024-11-15 10:46:26.462837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.220 [2024-11-15 10:46:26.462850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.220 [2024-11-15 10:46:26.462880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.220 qpair failed and we were unable to recover it. 
00:27:38.220 [2024-11-15 10:46:26.472737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.220 [2024-11-15 10:46:26.472839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.220 [2024-11-15 10:46:26.472864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.220 [2024-11-15 10:46:26.472879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.220 [2024-11-15 10:46:26.472890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.220 [2024-11-15 10:46:26.472921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.220 qpair failed and we were unable to recover it. 00:27:38.220 [2024-11-15 10:46:26.482764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.220 [2024-11-15 10:46:26.482870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.220 [2024-11-15 10:46:26.482896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.220 [2024-11-15 10:46:26.482910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.220 [2024-11-15 10:46:26.482922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.220 [2024-11-15 10:46:26.482952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.220 qpair failed and we were unable to recover it. 00:27:38.220 [2024-11-15 10:46:26.492747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.220 [2024-11-15 10:46:26.492854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.220 [2024-11-15 10:46:26.492878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.220 [2024-11-15 10:46:26.492892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.220 [2024-11-15 10:46:26.492904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.220 [2024-11-15 10:46:26.492946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.220 qpair failed and we were unable to recover it. 
00:27:38.220 [2024-11-15 10:46:26.502781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.220 [2024-11-15 10:46:26.502878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.220 [2024-11-15 10:46:26.502902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.220 [2024-11-15 10:46:26.502916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.220 [2024-11-15 10:46:26.502928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.220 [2024-11-15 10:46:26.502958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.220 qpair failed and we were unable to recover it. 00:27:38.220 [2024-11-15 10:46:26.512801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.220 [2024-11-15 10:46:26.512903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.220 [2024-11-15 10:46:26.512927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.220 [2024-11-15 10:46:26.512941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.220 [2024-11-15 10:46:26.512953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.220 [2024-11-15 10:46:26.512983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.220 qpair failed and we were unable to recover it. 00:27:38.220 [2024-11-15 10:46:26.522865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.220 [2024-11-15 10:46:26.522975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.220 [2024-11-15 10:46:26.523001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.220 [2024-11-15 10:46:26.523015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.220 [2024-11-15 10:46:26.523027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.220 [2024-11-15 10:46:26.523059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.220 qpair failed and we were unable to recover it. 
00:27:38.220 [2024-11-15 10:46:26.532853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.220 [2024-11-15 10:46:26.532960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.220 [2024-11-15 10:46:26.532986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.220 [2024-11-15 10:46:26.533015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.220 [2024-11-15 10:46:26.533029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.220 [2024-11-15 10:46:26.533059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.220 qpair failed and we were unable to recover it. 00:27:38.220 [2024-11-15 10:46:26.542929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.221 [2024-11-15 10:46:26.543032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.221 [2024-11-15 10:46:26.543060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.221 [2024-11-15 10:46:26.543074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.221 [2024-11-15 10:46:26.543086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.221 [2024-11-15 10:46:26.543116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.221 qpair failed and we were unable to recover it. 00:27:38.221 [2024-11-15 10:46:26.552942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.221 [2024-11-15 10:46:26.553048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.221 [2024-11-15 10:46:26.553074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.221 [2024-11-15 10:46:26.553089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.221 [2024-11-15 10:46:26.553101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.221 [2024-11-15 10:46:26.553132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.221 qpair failed and we were unable to recover it. 
00:27:38.221 [2024-11-15 10:46:26.562970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.221 [2024-11-15 10:46:26.563075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.221 [2024-11-15 10:46:26.563102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.221 [2024-11-15 10:46:26.563116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.221 [2024-11-15 10:46:26.563128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.221 [2024-11-15 10:46:26.563158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.221 qpair failed and we were unable to recover it. 00:27:38.221 [2024-11-15 10:46:26.573010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.221 [2024-11-15 10:46:26.573156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.221 [2024-11-15 10:46:26.573188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.221 [2024-11-15 10:46:26.573202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.221 [2024-11-15 10:46:26.573214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.221 [2024-11-15 10:46:26.573251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.221 qpair failed and we were unable to recover it. 00:27:38.221 [2024-11-15 10:46:26.583006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.221 [2024-11-15 10:46:26.583110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.221 [2024-11-15 10:46:26.583135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.221 [2024-11-15 10:46:26.583150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.221 [2024-11-15 10:46:26.583162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.221 [2024-11-15 10:46:26.583193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.221 qpair failed and we were unable to recover it. 
00:27:38.221 [2024-11-15 10:46:26.593031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.221 [2024-11-15 10:46:26.593134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.221 [2024-11-15 10:46:26.593159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.221 [2024-11-15 10:46:26.593174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.221 [2024-11-15 10:46:26.593186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.221 [2024-11-15 10:46:26.593215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.221 qpair failed and we were unable to recover it. 00:27:38.221 [2024-11-15 10:46:26.603061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.221 [2024-11-15 10:46:26.603179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.221 [2024-11-15 10:46:26.603204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.221 [2024-11-15 10:46:26.603218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.221 [2024-11-15 10:46:26.603230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.221 [2024-11-15 10:46:26.603260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.221 qpair failed and we were unable to recover it. 00:27:38.221 [2024-11-15 10:46:26.613094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.221 [2024-11-15 10:46:26.613245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.221 [2024-11-15 10:46:26.613271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.221 [2024-11-15 10:46:26.613285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.221 [2024-11-15 10:46:26.613296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.221 [2024-11-15 10:46:26.613327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.221 qpair failed and we were unable to recover it. 
00:27:38.221 [2024-11-15 10:46:26.623139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.221 [2024-11-15 10:46:26.623241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.221 [2024-11-15 10:46:26.623267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.221 [2024-11-15 10:46:26.623281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.221 [2024-11-15 10:46:26.623293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.221 [2024-11-15 10:46:26.623323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.221 qpair failed and we were unable to recover it. 00:27:38.221 [2024-11-15 10:46:26.633147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.221 [2024-11-15 10:46:26.633246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.221 [2024-11-15 10:46:26.633274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.221 [2024-11-15 10:46:26.633288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.221 [2024-11-15 10:46:26.633300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.221 [2024-11-15 10:46:26.633330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.221 qpair failed and we were unable to recover it. 00:27:38.221 [2024-11-15 10:46:26.643216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.221 [2024-11-15 10:46:26.643316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.221 [2024-11-15 10:46:26.643341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.221 [2024-11-15 10:46:26.643356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.221 [2024-11-15 10:46:26.643380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.221 [2024-11-15 10:46:26.643411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.221 qpair failed and we were unable to recover it. 
00:27:38.221 [2024-11-15 10:46:26.653201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.221 [2024-11-15 10:46:26.653311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.221 [2024-11-15 10:46:26.653336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.221 [2024-11-15 10:46:26.653351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.221 [2024-11-15 10:46:26.653371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.221 [2024-11-15 10:46:26.653415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.221 qpair failed and we were unable to recover it. 00:27:38.221 [2024-11-15 10:46:26.663216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.221 [2024-11-15 10:46:26.663329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.221 [2024-11-15 10:46:26.663369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.221 [2024-11-15 10:46:26.663387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.222 [2024-11-15 10:46:26.663399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.222 [2024-11-15 10:46:26.663429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.222 qpair failed and we were unable to recover it. 00:27:38.222 [2024-11-15 10:46:26.673250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.222 [2024-11-15 10:46:26.673354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.222 [2024-11-15 10:46:26.673387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.222 [2024-11-15 10:46:26.673402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.222 [2024-11-15 10:46:26.673414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.222 [2024-11-15 10:46:26.673445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.222 qpair failed and we were unable to recover it. 
00:27:38.222 [2024-11-15 10:46:26.683283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.222 [2024-11-15 10:46:26.683400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.222 [2024-11-15 10:46:26.683438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.222 [2024-11-15 10:46:26.683453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.222 [2024-11-15 10:46:26.683465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.222 [2024-11-15 10:46:26.683496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.222 qpair failed and we were unable to recover it. 00:27:38.480 [2024-11-15 10:46:26.693309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.480 [2024-11-15 10:46:26.693434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.480 [2024-11-15 10:46:26.693460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.480 [2024-11-15 10:46:26.693475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.480 [2024-11-15 10:46:26.693487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.480 [2024-11-15 10:46:26.693517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.480 qpair failed and we were unable to recover it. 00:27:38.480 [2024-11-15 10:46:26.703305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.480 [2024-11-15 10:46:26.703444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.480 [2024-11-15 10:46:26.703470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.480 [2024-11-15 10:46:26.703485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.480 [2024-11-15 10:46:26.703502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.480 [2024-11-15 10:46:26.703533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.480 qpair failed and we were unable to recover it. 
00:27:38.480 [2024-11-15 10:46:26.713342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.481 [2024-11-15 10:46:26.713449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.481 [2024-11-15 10:46:26.713474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.481 [2024-11-15 10:46:26.713488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.481 [2024-11-15 10:46:26.713500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.481 [2024-11-15 10:46:26.713531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.481 qpair failed and we were unable to recover it. 00:27:38.481 [2024-11-15 10:46:26.723417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.481 [2024-11-15 10:46:26.723512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.481 [2024-11-15 10:46:26.723537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.481 [2024-11-15 10:46:26.723552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.481 [2024-11-15 10:46:26.723564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.481 [2024-11-15 10:46:26.723594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.481 qpair failed and we were unable to recover it. 00:27:38.481 [2024-11-15 10:46:26.733409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.481 [2024-11-15 10:46:26.733497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.481 [2024-11-15 10:46:26.733522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.481 [2024-11-15 10:46:26.733536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.481 [2024-11-15 10:46:26.733548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.481 [2024-11-15 10:46:26.733578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.481 qpair failed and we were unable to recover it. 
00:27:38.481 [2024-11-15 10:46:26.743517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.481 [2024-11-15 10:46:26.743603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.481 [2024-11-15 10:46:26.743627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.481 [2024-11-15 10:46:26.743641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.481 [2024-11-15 10:46:26.743653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.481 [2024-11-15 10:46:26.743683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.481 qpair failed and we were unable to recover it. 00:27:38.481 [2024-11-15 10:46:26.753489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.481 [2024-11-15 10:46:26.753578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.481 [2024-11-15 10:46:26.753606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.481 [2024-11-15 10:46:26.753621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.481 [2024-11-15 10:46:26.753632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.481 [2024-11-15 10:46:26.753663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.481 qpair failed and we were unable to recover it. 00:27:38.481 [2024-11-15 10:46:26.763565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.481 [2024-11-15 10:46:26.763715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.481 [2024-11-15 10:46:26.763740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.481 [2024-11-15 10:46:26.763755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.481 [2024-11-15 10:46:26.763767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.481 [2024-11-15 10:46:26.763800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.481 qpair failed and we were unable to recover it. 
00:27:38.481 [2024-11-15 10:46:26.773559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.481 [2024-11-15 10:46:26.773662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.481 [2024-11-15 10:46:26.773687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.481 [2024-11-15 10:46:26.773701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.481 [2024-11-15 10:46:26.773712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.481 [2024-11-15 10:46:26.773742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.481 qpair failed and we were unable to recover it. 00:27:38.481 [2024-11-15 10:46:26.783559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.481 [2024-11-15 10:46:26.783651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.481 [2024-11-15 10:46:26.783677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.481 [2024-11-15 10:46:26.783691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.481 [2024-11-15 10:46:26.783703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.481 [2024-11-15 10:46:26.783733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.481 qpair failed and we were unable to recover it. 00:27:38.481 [2024-11-15 10:46:26.793712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.481 [2024-11-15 10:46:26.793811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.481 [2024-11-15 10:46:26.793842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.481 [2024-11-15 10:46:26.793857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.481 [2024-11-15 10:46:26.793869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.481 [2024-11-15 10:46:26.793900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.481 qpair failed and we were unable to recover it. 
00:27:38.481 [2024-11-15 10:46:26.803633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.481 [2024-11-15 10:46:26.803756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.481 [2024-11-15 10:46:26.803780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.481 [2024-11-15 10:46:26.803794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.481 [2024-11-15 10:46:26.803805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.481 [2024-11-15 10:46:26.803835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.481 qpair failed and we were unable to recover it. 00:27:38.481 [2024-11-15 10:46:26.813651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.481 [2024-11-15 10:46:26.813737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.481 [2024-11-15 10:46:26.813763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.481 [2024-11-15 10:46:26.813777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.481 [2024-11-15 10:46:26.813789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.481 [2024-11-15 10:46:26.813818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.481 qpair failed and we were unable to recover it. 00:27:38.481 [2024-11-15 10:46:26.823734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.481 [2024-11-15 10:46:26.823840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.481 [2024-11-15 10:46:26.823866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.481 [2024-11-15 10:46:26.823880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.481 [2024-11-15 10:46:26.823892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.481 [2024-11-15 10:46:26.823923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.481 qpair failed and we were unable to recover it. 
00:27:38.481 [2024-11-15 10:46:26.833729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.481 [2024-11-15 10:46:26.833827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.481 [2024-11-15 10:46:26.833852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.481 [2024-11-15 10:46:26.833866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.481 [2024-11-15 10:46:26.833883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.481 [2024-11-15 10:46:26.833914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.481 qpair failed and we were unable to recover it. 00:27:38.481 [2024-11-15 10:46:26.843785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.482 [2024-11-15 10:46:26.843919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.482 [2024-11-15 10:46:26.843945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.482 [2024-11-15 10:46:26.843960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.482 [2024-11-15 10:46:26.843972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.482 [2024-11-15 10:46:26.844002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.482 qpair failed and we were unable to recover it. 00:27:38.482 [2024-11-15 10:46:26.853779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.482 [2024-11-15 10:46:26.853879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.482 [2024-11-15 10:46:26.853904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.482 [2024-11-15 10:46:26.853918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.482 [2024-11-15 10:46:26.853930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.482 [2024-11-15 10:46:26.853960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.482 qpair failed and we were unable to recover it. 
00:27:38.482 [2024-11-15 10:46:26.863835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.482 [2024-11-15 10:46:26.863968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.482 [2024-11-15 10:46:26.863994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.482 [2024-11-15 10:46:26.864008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.482 [2024-11-15 10:46:26.864020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.482 [2024-11-15 10:46:26.864051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.482 qpair failed and we were unable to recover it. 00:27:38.482 [2024-11-15 10:46:26.873827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.482 [2024-11-15 10:46:26.873927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.482 [2024-11-15 10:46:26.873952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.482 [2024-11-15 10:46:26.873966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.482 [2024-11-15 10:46:26.873978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.482 [2024-11-15 10:46:26.874008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.482 qpair failed and we were unable to recover it. 00:27:38.482 [2024-11-15 10:46:26.883882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.482 [2024-11-15 10:46:26.883994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.482 [2024-11-15 10:46:26.884019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.482 [2024-11-15 10:46:26.884033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.482 [2024-11-15 10:46:26.884045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.482 [2024-11-15 10:46:26.884075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.482 qpair failed and we were unable to recover it. 
00:27:38.482 [2024-11-15 10:46:26.893893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.482 [2024-11-15 10:46:26.893997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.482 [2024-11-15 10:46:26.894021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.482 [2024-11-15 10:46:26.894035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.482 [2024-11-15 10:46:26.894046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.482 [2024-11-15 10:46:26.894076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.482 qpair failed and we were unable to recover it. 00:27:38.482 [2024-11-15 10:46:26.903903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.482 [2024-11-15 10:46:26.904012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.482 [2024-11-15 10:46:26.904037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.482 [2024-11-15 10:46:26.904052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.482 [2024-11-15 10:46:26.904064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.482 [2024-11-15 10:46:26.904094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.482 qpair failed and we were unable to recover it. 00:27:38.482 [2024-11-15 10:46:26.913958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.482 [2024-11-15 10:46:26.914066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.482 [2024-11-15 10:46:26.914092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.482 [2024-11-15 10:46:26.914107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.482 [2024-11-15 10:46:26.914119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.482 [2024-11-15 10:46:26.914149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.482 qpair failed and we were unable to recover it. 
00:27:38.482 [2024-11-15 10:46:26.924045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.482 [2024-11-15 10:46:26.924157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.482 [2024-11-15 10:46:26.924188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.482 [2024-11-15 10:46:26.924203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.482 [2024-11-15 10:46:26.924214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.482 [2024-11-15 10:46:26.924244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.482 qpair failed and we were unable to recover it. 00:27:38.482 [2024-11-15 10:46:26.934017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.482 [2024-11-15 10:46:26.934155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.482 [2024-11-15 10:46:26.934180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.482 [2024-11-15 10:46:26.934195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.482 [2024-11-15 10:46:26.934207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.482 [2024-11-15 10:46:26.934236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.482 qpair failed and we were unable to recover it. 00:27:38.482 [2024-11-15 10:46:26.944048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.482 [2024-11-15 10:46:26.944151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.482 [2024-11-15 10:46:26.944177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.482 [2024-11-15 10:46:26.944191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.482 [2024-11-15 10:46:26.944203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.482 [2024-11-15 10:46:26.944233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.482 qpair failed and we were unable to recover it. 
00:27:38.741 [2024-11-15 10:46:26.954060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.741 [2024-11-15 10:46:26.954160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.741 [2024-11-15 10:46:26.954186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.741 [2024-11-15 10:46:26.954201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.741 [2024-11-15 10:46:26.954213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.741 [2024-11-15 10:46:26.954242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.741 qpair failed and we were unable to recover it. 00:27:38.741 [2024-11-15 10:46:26.964095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.741 [2024-11-15 10:46:26.964206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.741 [2024-11-15 10:46:26.964232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.741 [2024-11-15 10:46:26.964251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.741 [2024-11-15 10:46:26.964264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.741 [2024-11-15 10:46:26.964295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.741 qpair failed and we were unable to recover it. 00:27:38.741 [2024-11-15 10:46:26.974178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.741 [2024-11-15 10:46:26.974326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.741 [2024-11-15 10:46:26.974352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.741 [2024-11-15 10:46:26.974373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.741 [2024-11-15 10:46:26.974387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.741 [2024-11-15 10:46:26.974417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.741 qpair failed and we were unable to recover it. 
00:27:38.741 [2024-11-15 10:46:26.984146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.741 [2024-11-15 10:46:26.984248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.741 [2024-11-15 10:46:26.984274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.741 [2024-11-15 10:46:26.984288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.741 [2024-11-15 10:46:26.984300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.741 [2024-11-15 10:46:26.984330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.741 qpair failed and we were unable to recover it. 00:27:38.741 [2024-11-15 10:46:26.994152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.741 [2024-11-15 10:46:26.994252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.741 [2024-11-15 10:46:26.994277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.741 [2024-11-15 10:46:26.994291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.741 [2024-11-15 10:46:26.994303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.741 [2024-11-15 10:46:26.994334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.741 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-15 10:46:27.004194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.742 [2024-11-15 10:46:27.004319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.742 [2024-11-15 10:46:27.004345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.742 [2024-11-15 10:46:27.004359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.742 [2024-11-15 10:46:27.004384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.742 [2024-11-15 10:46:27.004414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.742 qpair failed and we were unable to recover it. 
00:27:38.742 [2024-11-15 10:46:27.014218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.742 [2024-11-15 10:46:27.014320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.742 [2024-11-15 10:46:27.014345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.742 [2024-11-15 10:46:27.014359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.742 [2024-11-15 10:46:27.014380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.742 [2024-11-15 10:46:27.014412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-15 10:46:27.024269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.742 [2024-11-15 10:46:27.024356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.742 [2024-11-15 10:46:27.024388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.742 [2024-11-15 10:46:27.024403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.742 [2024-11-15 10:46:27.024415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.742 [2024-11-15 10:46:27.024446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-15 10:46:27.034262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.742 [2024-11-15 10:46:27.034372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.742 [2024-11-15 10:46:27.034396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.742 [2024-11-15 10:46:27.034410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.742 [2024-11-15 10:46:27.034422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.742 [2024-11-15 10:46:27.034452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.742 qpair failed and we were unable to recover it. 
00:27:38.742 [2024-11-15 10:46:27.044307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.742 [2024-11-15 10:46:27.044426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.742 [2024-11-15 10:46:27.044452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.742 [2024-11-15 10:46:27.044467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.742 [2024-11-15 10:46:27.044479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.742 [2024-11-15 10:46:27.044509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-15 10:46:27.054341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.742 [2024-11-15 10:46:27.054443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.742 [2024-11-15 10:46:27.054467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.742 [2024-11-15 10:46:27.054480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.742 [2024-11-15 10:46:27.054492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.742 [2024-11-15 10:46:27.054523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-15 10:46:27.064404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.742 [2024-11-15 10:46:27.064533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.742 [2024-11-15 10:46:27.064558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.742 [2024-11-15 10:46:27.064572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.742 [2024-11-15 10:46:27.064584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.742 [2024-11-15 10:46:27.064615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.742 qpair failed and we were unable to recover it. 
00:27:38.742 [2024-11-15 10:46:27.074399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.742 [2024-11-15 10:46:27.074485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.742 [2024-11-15 10:46:27.074509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.742 [2024-11-15 10:46:27.074522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.742 [2024-11-15 10:46:27.074534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.742 [2024-11-15 10:46:27.074564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-15 10:46:27.084475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.742 [2024-11-15 10:46:27.084569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.742 [2024-11-15 10:46:27.084593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.742 [2024-11-15 10:46:27.084607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.742 [2024-11-15 10:46:27.084619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.742 [2024-11-15 10:46:27.084649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-15 10:46:27.094424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.742 [2024-11-15 10:46:27.094516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.742 [2024-11-15 10:46:27.094542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.742 [2024-11-15 10:46:27.094561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.742 [2024-11-15 10:46:27.094574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.742 [2024-11-15 10:46:27.094605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.742 qpair failed and we were unable to recover it. 
00:27:38.742 [2024-11-15 10:46:27.104475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.742 [2024-11-15 10:46:27.104594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.742 [2024-11-15 10:46:27.104619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.742 [2024-11-15 10:46:27.104634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.742 [2024-11-15 10:46:27.104645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.742 [2024-11-15 10:46:27.104686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-15 10:46:27.114508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.742 [2024-11-15 10:46:27.114596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.742 [2024-11-15 10:46:27.114620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.742 [2024-11-15 10:46:27.114634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.742 [2024-11-15 10:46:27.114646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.742 [2024-11-15 10:46:27.114675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-15 10:46:27.124589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.742 [2024-11-15 10:46:27.124685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.742 [2024-11-15 10:46:27.124709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.742 [2024-11-15 10:46:27.124723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.743 [2024-11-15 10:46:27.124735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.743 [2024-11-15 10:46:27.124764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.743 qpair failed and we were unable to recover it. 
00:27:38.743 [2024-11-15 10:46:27.134560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.743 [2024-11-15 10:46:27.134651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.743 [2024-11-15 10:46:27.134676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.743 [2024-11-15 10:46:27.134689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.743 [2024-11-15 10:46:27.134701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.743 [2024-11-15 10:46:27.134737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-15 10:46:27.144574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.743 [2024-11-15 10:46:27.144660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.743 [2024-11-15 10:46:27.144684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.743 [2024-11-15 10:46:27.144699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.743 [2024-11-15 10:46:27.144711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.743 [2024-11-15 10:46:27.144741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-15 10:46:27.154609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.743 [2024-11-15 10:46:27.154696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.743 [2024-11-15 10:46:27.154721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.743 [2024-11-15 10:46:27.154735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.743 [2024-11-15 10:46:27.154747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.743 [2024-11-15 10:46:27.154776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.743 qpair failed and we were unable to recover it. 
00:27:38.743 [2024-11-15 10:46:27.164761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.743 [2024-11-15 10:46:27.164888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.743 [2024-11-15 10:46:27.164913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.743 [2024-11-15 10:46:27.164927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.743 [2024-11-15 10:46:27.164939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.743 [2024-11-15 10:46:27.164969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-15 10:46:27.174649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.743 [2024-11-15 10:46:27.174767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.743 [2024-11-15 10:46:27.174791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.743 [2024-11-15 10:46:27.174804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.743 [2024-11-15 10:46:27.174815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.743 [2024-11-15 10:46:27.174845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-15 10:46:27.184736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.743 [2024-11-15 10:46:27.184838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.743 [2024-11-15 10:46:27.184863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.743 [2024-11-15 10:46:27.184877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.743 [2024-11-15 10:46:27.184889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.743 [2024-11-15 10:46:27.184919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.743 qpair failed and we were unable to recover it. 
00:27:38.743 [2024-11-15 10:46:27.194727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.743 [2024-11-15 10:46:27.194832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.743 [2024-11-15 10:46:27.194857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.743 [2024-11-15 10:46:27.194872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.743 [2024-11-15 10:46:27.194884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.743 [2024-11-15 10:46:27.194913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-15 10:46:27.204817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.743 [2024-11-15 10:46:27.204924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.743 [2024-11-15 10:46:27.204950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.743 [2024-11-15 10:46:27.204964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.743 [2024-11-15 10:46:27.204976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:38.743 [2024-11-15 10:46:27.205017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:38.743 qpair failed and we were unable to recover it. 00:27:39.001 [2024-11-15 10:46:27.214760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.001 [2024-11-15 10:46:27.214863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.001 [2024-11-15 10:46:27.214887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.001 [2024-11-15 10:46:27.214901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.001 [2024-11-15 10:46:27.214913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.001 [2024-11-15 10:46:27.214942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.001 qpair failed and we were unable to recover it. 
00:27:39.001 [2024-11-15 10:46:27.224848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.001 [2024-11-15 10:46:27.224947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.001 [2024-11-15 10:46:27.224978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.001 [2024-11-15 10:46:27.224993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.001 [2024-11-15 10:46:27.225005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.001 [2024-11-15 10:46:27.225035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.001 qpair failed and we were unable to recover it. 00:27:39.001 [2024-11-15 10:46:27.234822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.001 [2024-11-15 10:46:27.234924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.001 [2024-11-15 10:46:27.234950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.001 [2024-11-15 10:46:27.234965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.001 [2024-11-15 10:46:27.234976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.235007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.244841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.244949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.244978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.244992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.245004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.245034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 
00:27:39.002 [2024-11-15 10:46:27.254879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.254985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.255010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.255025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.255037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.255066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.264865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.264964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.264988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.265002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.265020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.265051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.274960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.275080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.275105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.275120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.275132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.275162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 
00:27:39.002 [2024-11-15 10:46:27.284969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.285074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.285099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.285113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.285125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.285155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.295023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.295141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.295166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.295180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.295192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.295222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.304999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.305099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.305125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.305140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.305152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.305181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 
00:27:39.002 [2024-11-15 10:46:27.315015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.315131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.315156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.315171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.315183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.315212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.325061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.325165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.325191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.325205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.325217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.325247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.335070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.335194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.335219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.335234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.335246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.335276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 
00:27:39.002 [2024-11-15 10:46:27.345099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.345242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.345267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.345282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.345294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.345324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.355230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.355330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.355371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.355389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.355401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.355431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.365177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.365279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.365304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.365318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.365330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.365360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 
00:27:39.002 [2024-11-15 10:46:27.375223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.375341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.375377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.375394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.375406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.375436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.385294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.385397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.385423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.385438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.385450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.385480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.395238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.395335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.395372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.395389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.395406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.395437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 
00:27:39.002 [2024-11-15 10:46:27.405291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.405403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.405428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.405442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.405454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.405484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.415312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.415423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.415448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.415463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.415474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.415505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.425360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.425471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.425497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.425511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.425523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.425552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 
00:27:39.002 [2024-11-15 10:46:27.435395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.435496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.435521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.435536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.435547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.435578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.445486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.445579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.445605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.445619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.445631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.445661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.002 [2024-11-15 10:46:27.455441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.455530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.455554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.455568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.455580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.455609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 
00:27:39.002 [2024-11-15 10:46:27.465474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.002 [2024-11-15 10:46:27.465556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.002 [2024-11-15 10:46:27.465580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.002 [2024-11-15 10:46:27.465594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.002 [2024-11-15 10:46:27.465605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.002 [2024-11-15 10:46:27.465635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.002 qpair failed and we were unable to recover it. 00:27:39.261 [2024-11-15 10:46:27.475524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.261 [2024-11-15 10:46:27.475611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.261 [2024-11-15 10:46:27.475638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.261 [2024-11-15 10:46:27.475652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.261 [2024-11-15 10:46:27.475664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.261 [2024-11-15 10:46:27.475693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.261 qpair failed and we were unable to recover it. 00:27:39.261 [2024-11-15 10:46:27.485602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.261 [2024-11-15 10:46:27.485709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.261 [2024-11-15 10:46:27.485739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.261 [2024-11-15 10:46:27.485754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.261 [2024-11-15 10:46:27.485766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.261 [2024-11-15 10:46:27.485796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.261 qpair failed and we were unable to recover it. 
00:27:39.261 [2024-11-15 10:46:27.495581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.261 [2024-11-15 10:46:27.495669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.261 [2024-11-15 10:46:27.495693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.261 [2024-11-15 10:46:27.495707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.261 [2024-11-15 10:46:27.495718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.261 [2024-11-15 10:46:27.495747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.261 qpair failed and we were unable to recover it. 00:27:39.261 [2024-11-15 10:46:27.505597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.261 [2024-11-15 10:46:27.505690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.261 [2024-11-15 10:46:27.505716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.261 [2024-11-15 10:46:27.505730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.261 [2024-11-15 10:46:27.505743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.261 [2024-11-15 10:46:27.505773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.261 qpair failed and we were unable to recover it. 00:27:39.261 [2024-11-15 10:46:27.515611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.261 [2024-11-15 10:46:27.515715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.261 [2024-11-15 10:46:27.515739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.262 [2024-11-15 10:46:27.515753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.262 [2024-11-15 10:46:27.515765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.262 [2024-11-15 10:46:27.515795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.262 qpair failed and we were unable to recover it. 
00:27:39.262 [2024-11-15 10:46:27.525705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.262 [2024-11-15 10:46:27.525812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.262 [2024-11-15 10:46:27.525836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.262 [2024-11-15 10:46:27.525856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.262 [2024-11-15 10:46:27.525868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.262 [2024-11-15 10:46:27.525898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.262 qpair failed and we were unable to recover it. 00:27:39.262 [2024-11-15 10:46:27.535718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.262 [2024-11-15 10:46:27.535819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.262 [2024-11-15 10:46:27.535845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.262 [2024-11-15 10:46:27.535859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.262 [2024-11-15 10:46:27.535870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.262 [2024-11-15 10:46:27.535899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.262 qpair failed and we were unable to recover it. 00:27:39.262 [2024-11-15 10:46:27.545729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.262 [2024-11-15 10:46:27.545827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.262 [2024-11-15 10:46:27.545855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.262 [2024-11-15 10:46:27.545869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.262 [2024-11-15 10:46:27.545881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.262 [2024-11-15 10:46:27.545911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.262 qpair failed and we were unable to recover it. 
00:27:39.262 [2024-11-15 10:46:27.555752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.262 [2024-11-15 10:46:27.555851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.262 [2024-11-15 10:46:27.555877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.262 [2024-11-15 10:46:27.555891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.262 [2024-11-15 10:46:27.555903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.262 [2024-11-15 10:46:27.555935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.262 qpair failed and we were unable to recover it. 00:27:39.262 [2024-11-15 10:46:27.565792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.262 [2024-11-15 10:46:27.565898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.262 [2024-11-15 10:46:27.565924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.262 [2024-11-15 10:46:27.565939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.262 [2024-11-15 10:46:27.565951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.262 [2024-11-15 10:46:27.565981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.262 qpair failed and we were unable to recover it. 00:27:39.262 [2024-11-15 10:46:27.575823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.262 [2024-11-15 10:46:27.575958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.262 [2024-11-15 10:46:27.575983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.262 [2024-11-15 10:46:27.575997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.262 [2024-11-15 10:46:27.576010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.262 [2024-11-15 10:46:27.576040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.262 qpair failed and we were unable to recover it. 
00:27:39.262 [2024-11-15 10:46:27.585829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.262 [2024-11-15 10:46:27.585936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.262 [2024-11-15 10:46:27.585962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.262 [2024-11-15 10:46:27.585976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.262 [2024-11-15 10:46:27.585988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.262 [2024-11-15 10:46:27.586018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.262 qpair failed and we were unable to recover it. 00:27:39.262 [2024-11-15 10:46:27.595864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.262 [2024-11-15 10:46:27.595987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.262 [2024-11-15 10:46:27.596013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.262 [2024-11-15 10:46:27.596027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.262 [2024-11-15 10:46:27.596040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.262 [2024-11-15 10:46:27.596070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.262 qpair failed and we were unable to recover it. 00:27:39.262 [2024-11-15 10:46:27.605933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.262 [2024-11-15 10:46:27.606058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.262 [2024-11-15 10:46:27.606083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.262 [2024-11-15 10:46:27.606098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.262 [2024-11-15 10:46:27.606110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.262 [2024-11-15 10:46:27.606140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.262 qpair failed and we were unable to recover it. 
00:27:39.262 [2024-11-15 10:46:27.615924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.262 [2024-11-15 10:46:27.616026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.262 [2024-11-15 10:46:27.616050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.262 [2024-11-15 10:46:27.616064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.262 [2024-11-15 10:46:27.616076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.262 [2024-11-15 10:46:27.616105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.262 qpair failed and we were unable to recover it. 00:27:39.262 [2024-11-15 10:46:27.625927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.262 [2024-11-15 10:46:27.626036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.262 [2024-11-15 10:46:27.626061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.262 [2024-11-15 10:46:27.626075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.262 [2024-11-15 10:46:27.626088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.262 [2024-11-15 10:46:27.626117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.262 qpair failed and we were unable to recover it. 00:27:39.262 [2024-11-15 10:46:27.635968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.262 [2024-11-15 10:46:27.636066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.262 [2024-11-15 10:46:27.636092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.262 [2024-11-15 10:46:27.636106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.262 [2024-11-15 10:46:27.636118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.262 [2024-11-15 10:46:27.636148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.262 qpair failed and we were unable to recover it. 
00:27:39.262 [2024-11-15 10:46:27.646009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.262 [2024-11-15 10:46:27.646126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.262 [2024-11-15 10:46:27.646152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.263 [2024-11-15 10:46:27.646166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.263 [2024-11-15 10:46:27.646178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.263 [2024-11-15 10:46:27.646209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.263 qpair failed and we were unable to recover it. 00:27:39.263 [2024-11-15 10:46:27.656013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.263 [2024-11-15 10:46:27.656115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.263 [2024-11-15 10:46:27.656140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.263 [2024-11-15 10:46:27.656160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.263 [2024-11-15 10:46:27.656173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.263 [2024-11-15 10:46:27.656203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.263 qpair failed and we were unable to recover it. 00:27:39.263 [2024-11-15 10:46:27.666050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.263 [2024-11-15 10:46:27.666153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.263 [2024-11-15 10:46:27.666178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.263 [2024-11-15 10:46:27.666193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.263 [2024-11-15 10:46:27.666204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.263 [2024-11-15 10:46:27.666234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.263 qpair failed and we were unable to recover it. 
00:27:39.263 [2024-11-15 10:46:27.676094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.263 [2024-11-15 10:46:27.676195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.263 [2024-11-15 10:46:27.676220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.263 [2024-11-15 10:46:27.676234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.263 [2024-11-15 10:46:27.676245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.263 [2024-11-15 10:46:27.676275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.263 qpair failed and we were unable to recover it. 00:27:39.263 [2024-11-15 10:46:27.686119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.263 [2024-11-15 10:46:27.686225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.263 [2024-11-15 10:46:27.686250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.263 [2024-11-15 10:46:27.686265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.263 [2024-11-15 10:46:27.686277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.263 [2024-11-15 10:46:27.686306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.263 qpair failed and we were unable to recover it. 00:27:39.263 [2024-11-15 10:46:27.696134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.263 [2024-11-15 10:46:27.696239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.263 [2024-11-15 10:46:27.696265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.263 [2024-11-15 10:46:27.696280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.263 [2024-11-15 10:46:27.696292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.263 [2024-11-15 10:46:27.696327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.263 qpair failed and we were unable to recover it. 
00:27:39.263 [2024-11-15 10:46:27.706177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.263 [2024-11-15 10:46:27.706303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.263 [2024-11-15 10:46:27.706328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.263 [2024-11-15 10:46:27.706343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.263 [2024-11-15 10:46:27.706355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.263 [2024-11-15 10:46:27.706395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.263 qpair failed and we were unable to recover it. 00:27:39.263 [2024-11-15 10:46:27.716193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.263 [2024-11-15 10:46:27.716292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.263 [2024-11-15 10:46:27.716317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.263 [2024-11-15 10:46:27.716331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.263 [2024-11-15 10:46:27.716343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.263 [2024-11-15 10:46:27.716381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.263 qpair failed and we were unable to recover it. 00:27:39.263 [2024-11-15 10:46:27.726270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.263 [2024-11-15 10:46:27.726411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.263 [2024-11-15 10:46:27.726438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.263 [2024-11-15 10:46:27.726453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.263 [2024-11-15 10:46:27.726466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.263 [2024-11-15 10:46:27.726496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.263 qpair failed and we were unable to recover it. 
00:27:39.533 [2024-11-15 10:46:27.736276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.533 [2024-11-15 10:46:27.736390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.533 [2024-11-15 10:46:27.736417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.533 [2024-11-15 10:46:27.736431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.533 [2024-11-15 10:46:27.736443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.533 [2024-11-15 10:46:27.736473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.533 qpair failed and we were unable to recover it. 00:27:39.533 [2024-11-15 10:46:27.746288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.533 [2024-11-15 10:46:27.746400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.533 [2024-11-15 10:46:27.746426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.533 [2024-11-15 10:46:27.746440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.533 [2024-11-15 10:46:27.746453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.533 [2024-11-15 10:46:27.746484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.533 qpair failed and we were unable to recover it. 00:27:39.533 [2024-11-15 10:46:27.756304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.533 [2024-11-15 10:46:27.756420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.533 [2024-11-15 10:46:27.756445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.533 [2024-11-15 10:46:27.756460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.533 [2024-11-15 10:46:27.756472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.533 [2024-11-15 10:46:27.756503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.533 qpair failed and we were unable to recover it. 
00:27:39.533 [2024-11-15 10:46:27.766393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.533 [2024-11-15 10:46:27.766488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.533 [2024-11-15 10:46:27.766512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.533 [2024-11-15 10:46:27.766526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.533 [2024-11-15 10:46:27.766538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.533 [2024-11-15 10:46:27.766568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.533 qpair failed and we were unable to recover it. 00:27:39.533 [2024-11-15 10:46:27.776416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.533 [2024-11-15 10:46:27.776513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.533 [2024-11-15 10:46:27.776539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.533 [2024-11-15 10:46:27.776553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.533 [2024-11-15 10:46:27.776566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.533 [2024-11-15 10:46:27.776596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.533 qpair failed and we were unable to recover it. 00:27:39.533 [2024-11-15 10:46:27.786418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.533 [2024-11-15 10:46:27.786502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.533 [2024-11-15 10:46:27.786531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.533 [2024-11-15 10:46:27.786547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.533 [2024-11-15 10:46:27.786558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.533 [2024-11-15 10:46:27.786588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.533 qpair failed and we were unable to recover it. 
00:27:39.533 [2024-11-15 10:46:27.796494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.533 [2024-11-15 10:46:27.796582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.533 [2024-11-15 10:46:27.796610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.533 [2024-11-15 10:46:27.796625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.533 [2024-11-15 10:46:27.796637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.533 [2024-11-15 10:46:27.796666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.533 qpair failed and we were unable to recover it. 00:27:39.533 [2024-11-15 10:46:27.806467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.533 [2024-11-15 10:46:27.806581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.533 [2024-11-15 10:46:27.806607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.533 [2024-11-15 10:46:27.806621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.533 [2024-11-15 10:46:27.806634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.533 [2024-11-15 10:46:27.806663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.533 qpair failed and we were unable to recover it. 00:27:39.533 [2024-11-15 10:46:27.816553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.533 [2024-11-15 10:46:27.816647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.533 [2024-11-15 10:46:27.816673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.533 [2024-11-15 10:46:27.816688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.533 [2024-11-15 10:46:27.816700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.533 [2024-11-15 10:46:27.816730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.533 qpair failed and we were unable to recover it. 
00:27:39.533 [2024-11-15 10:46:27.826517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.533 [2024-11-15 10:46:27.826606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.533 [2024-11-15 10:46:27.826629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.533 [2024-11-15 10:46:27.826643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.533 [2024-11-15 10:46:27.826660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.534 [2024-11-15 10:46:27.826691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.534 qpair failed and we were unable to recover it. 00:27:39.534 [2024-11-15 10:46:27.836535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.534 [2024-11-15 10:46:27.836633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.534 [2024-11-15 10:46:27.836659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.534 [2024-11-15 10:46:27.836672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.534 [2024-11-15 10:46:27.836685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.534 [2024-11-15 10:46:27.836714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.534 qpair failed and we were unable to recover it. 00:27:39.534 [2024-11-15 10:46:27.846648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.534 [2024-11-15 10:46:27.846788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.534 [2024-11-15 10:46:27.846814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.534 [2024-11-15 10:46:27.846828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.534 [2024-11-15 10:46:27.846840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.534 [2024-11-15 10:46:27.846870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.534 qpair failed and we were unable to recover it. 
00:27:39.534 [2024-11-15 10:46:27.856624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.534 [2024-11-15 10:46:27.856767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.534 [2024-11-15 10:46:27.856791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.534 [2024-11-15 10:46:27.856805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.534 [2024-11-15 10:46:27.856817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.534 [2024-11-15 10:46:27.856851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.534 qpair failed and we were unable to recover it. 00:27:39.534 [2024-11-15 10:46:27.866670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.534 [2024-11-15 10:46:27.866773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.534 [2024-11-15 10:46:27.866798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.534 [2024-11-15 10:46:27.866813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.534 [2024-11-15 10:46:27.866825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.534 [2024-11-15 10:46:27.866866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.534 qpair failed and we were unable to recover it. 00:27:39.534 [2024-11-15 10:46:27.876695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.534 [2024-11-15 10:46:27.876803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.534 [2024-11-15 10:46:27.876829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.534 [2024-11-15 10:46:27.876844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.534 [2024-11-15 10:46:27.876856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.534 [2024-11-15 10:46:27.876886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.534 qpair failed and we were unable to recover it. 
00:27:39.534 [2024-11-15 10:46:27.886767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.534 [2024-11-15 10:46:27.886876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.534 [2024-11-15 10:46:27.886901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.534 [2024-11-15 10:46:27.886915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.534 [2024-11-15 10:46:27.886928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.534 [2024-11-15 10:46:27.886958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.534 qpair failed and we were unable to recover it. 00:27:39.534 [2024-11-15 10:46:27.896744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.534 [2024-11-15 10:46:27.896886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.534 [2024-11-15 10:46:27.896911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.534 [2024-11-15 10:46:27.896926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.534 [2024-11-15 10:46:27.896938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.534 [2024-11-15 10:46:27.896974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.534 qpair failed and we were unable to recover it. 00:27:39.534 [2024-11-15 10:46:27.906744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.534 [2024-11-15 10:46:27.906840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.534 [2024-11-15 10:46:27.906865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.534 [2024-11-15 10:46:27.906878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.534 [2024-11-15 10:46:27.906890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.534 [2024-11-15 10:46:27.906919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.534 qpair failed and we were unable to recover it. 
00:27:39.534 [2024-11-15 10:46:27.916780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.534 [2024-11-15 10:46:27.916892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.534 [2024-11-15 10:46:27.916924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.534 [2024-11-15 10:46:27.916939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.534 [2024-11-15 10:46:27.916952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.534 [2024-11-15 10:46:27.916982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.534 qpair failed and we were unable to recover it. 00:27:39.534 [2024-11-15 10:46:27.926868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.534 [2024-11-15 10:46:27.926973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.534 [2024-11-15 10:46:27.926998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.534 [2024-11-15 10:46:27.927013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.534 [2024-11-15 10:46:27.927025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.534 [2024-11-15 10:46:27.927055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.534 qpair failed and we were unable to recover it. 00:27:39.534 [2024-11-15 10:46:27.936857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.534 [2024-11-15 10:46:27.936958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.534 [2024-11-15 10:46:27.936985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.534 [2024-11-15 10:46:27.937000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.534 [2024-11-15 10:46:27.937012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.534 [2024-11-15 10:46:27.937042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.534 qpair failed and we were unable to recover it. 
00:27:39.534 [2024-11-15 10:46:27.946884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.534 [2024-11-15 10:46:27.947002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.534 [2024-11-15 10:46:27.947027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.534 [2024-11-15 10:46:27.947042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.534 [2024-11-15 10:46:27.947055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.534 [2024-11-15 10:46:27.947085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.534 qpair failed and we were unable to recover it. 00:27:39.534 [2024-11-15 10:46:27.956945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.534 [2024-11-15 10:46:27.957063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.534 [2024-11-15 10:46:27.957089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.534 [2024-11-15 10:46:27.957103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.534 [2024-11-15 10:46:27.957121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.535 [2024-11-15 10:46:27.957152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.535 qpair failed and we were unable to recover it. 00:27:39.535 [2024-11-15 10:46:27.966988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.535 [2024-11-15 10:46:27.967120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.535 [2024-11-15 10:46:27.967145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.535 [2024-11-15 10:46:27.967160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.535 [2024-11-15 10:46:27.967172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.535 [2024-11-15 10:46:27.967202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.535 qpair failed and we were unable to recover it. 
00:27:39.535 [2024-11-15 10:46:27.976989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.535 [2024-11-15 10:46:27.977091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.535 [2024-11-15 10:46:27.977117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.535 [2024-11-15 10:46:27.977131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.535 [2024-11-15 10:46:27.977143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.535 [2024-11-15 10:46:27.977173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.535 qpair failed and we were unable to recover it. 00:27:39.535 [2024-11-15 10:46:27.987007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.535 [2024-11-15 10:46:27.987092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.535 [2024-11-15 10:46:27.987116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.535 [2024-11-15 10:46:27.987130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.535 [2024-11-15 10:46:27.987142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.535 [2024-11-15 10:46:27.987173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.535 qpair failed and we were unable to recover it. 00:27:39.870 [2024-11-15 10:46:27.997100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.870 [2024-11-15 10:46:27.997194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.870 [2024-11-15 10:46:27.997220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.870 [2024-11-15 10:46:27.997235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.870 [2024-11-15 10:46:27.997247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.870 [2024-11-15 10:46:27.997278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.870 qpair failed and we were unable to recover it. 
00:27:39.870 [2024-11-15 10:46:28.007157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.870 [2024-11-15 10:46:28.007267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.870 [2024-11-15 10:46:28.007293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.870 [2024-11-15 10:46:28.007307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.870 [2024-11-15 10:46:28.007319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.870 [2024-11-15 10:46:28.007350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.870 qpair failed and we were unable to recover it. 00:27:39.870 [2024-11-15 10:46:28.017147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.870 [2024-11-15 10:46:28.017235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.870 [2024-11-15 10:46:28.017260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.870 [2024-11-15 10:46:28.017274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.870 [2024-11-15 10:46:28.017286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.870 [2024-11-15 10:46:28.017316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.870 qpair failed and we were unable to recover it. 00:27:39.870 [2024-11-15 10:46:28.027091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.870 [2024-11-15 10:46:28.027193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.870 [2024-11-15 10:46:28.027219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.870 [2024-11-15 10:46:28.027234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.870 [2024-11-15 10:46:28.027246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.870 [2024-11-15 10:46:28.027275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.870 qpair failed and we were unable to recover it. 
00:27:39.870 [2024-11-15 10:46:28.037159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.870 [2024-11-15 10:46:28.037317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.870 [2024-11-15 10:46:28.037343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.870 [2024-11-15 10:46:28.037358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.870 [2024-11-15 10:46:28.037381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.870 [2024-11-15 10:46:28.037413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.870 qpair failed and we were unable to recover it. 00:27:39.870 [2024-11-15 10:46:28.047203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.870 [2024-11-15 10:46:28.047307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.870 [2024-11-15 10:46:28.047341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.870 [2024-11-15 10:46:28.047356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.870 [2024-11-15 10:46:28.047377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.047408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.871 qpair failed and we were unable to recover it. 00:27:39.871 [2024-11-15 10:46:28.057218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.871 [2024-11-15 10:46:28.057320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.871 [2024-11-15 10:46:28.057345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.871 [2024-11-15 10:46:28.057359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.871 [2024-11-15 10:46:28.057380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.057412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.871 qpair failed and we were unable to recover it. 
00:27:39.871 [2024-11-15 10:46:28.067233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.871 [2024-11-15 10:46:28.067375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.871 [2024-11-15 10:46:28.067401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.871 [2024-11-15 10:46:28.067416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.871 [2024-11-15 10:46:28.067428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.067459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.871 qpair failed and we were unable to recover it. 00:27:39.871 [2024-11-15 10:46:28.077278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.871 [2024-11-15 10:46:28.077438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.871 [2024-11-15 10:46:28.077464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.871 [2024-11-15 10:46:28.077479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.871 [2024-11-15 10:46:28.077491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.077521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.871 qpair failed and we were unable to recover it. 00:27:39.871 [2024-11-15 10:46:28.087314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.871 [2024-11-15 10:46:28.087448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.871 [2024-11-15 10:46:28.087473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.871 [2024-11-15 10:46:28.087492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.871 [2024-11-15 10:46:28.087505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.087547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.871 qpair failed and we were unable to recover it. 
00:27:39.871 [2024-11-15 10:46:28.097396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.871 [2024-11-15 10:46:28.097490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.871 [2024-11-15 10:46:28.097516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.871 [2024-11-15 10:46:28.097531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.871 [2024-11-15 10:46:28.097543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.097574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.871 qpair failed and we were unable to recover it. 00:27:39.871 [2024-11-15 10:46:28.107374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.871 [2024-11-15 10:46:28.107459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.871 [2024-11-15 10:46:28.107484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.871 [2024-11-15 10:46:28.107498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.871 [2024-11-15 10:46:28.107511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.107541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.871 qpair failed and we were unable to recover it. 00:27:39.871 [2024-11-15 10:46:28.117336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.871 [2024-11-15 10:46:28.117453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.871 [2024-11-15 10:46:28.117478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.871 [2024-11-15 10:46:28.117491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.871 [2024-11-15 10:46:28.117503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.117533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.871 qpair failed and we were unable to recover it. 
00:27:39.871 [2024-11-15 10:46:28.127428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.871 [2024-11-15 10:46:28.127521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.871 [2024-11-15 10:46:28.127549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.871 [2024-11-15 10:46:28.127563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.871 [2024-11-15 10:46:28.127575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.127611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.871 qpair failed and we were unable to recover it. 00:27:39.871 [2024-11-15 10:46:28.137461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.871 [2024-11-15 10:46:28.137555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.871 [2024-11-15 10:46:28.137581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.871 [2024-11-15 10:46:28.137596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.871 [2024-11-15 10:46:28.137608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.137639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.871 qpair failed and we were unable to recover it. 00:27:39.871 [2024-11-15 10:46:28.147463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.871 [2024-11-15 10:46:28.147552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.871 [2024-11-15 10:46:28.147581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.871 [2024-11-15 10:46:28.147596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.871 [2024-11-15 10:46:28.147607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.147638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.871 qpair failed and we were unable to recover it. 
00:27:39.871 [2024-11-15 10:46:28.157491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.871 [2024-11-15 10:46:28.157574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.871 [2024-11-15 10:46:28.157598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.871 [2024-11-15 10:46:28.157613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.871 [2024-11-15 10:46:28.157624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.157654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.871 qpair failed and we were unable to recover it. 00:27:39.871 [2024-11-15 10:46:28.167583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.871 [2024-11-15 10:46:28.167708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.871 [2024-11-15 10:46:28.167734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.871 [2024-11-15 10:46:28.167749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.871 [2024-11-15 10:46:28.167761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.167791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.871 qpair failed and we were unable to recover it. 00:27:39.871 [2024-11-15 10:46:28.177604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.871 [2024-11-15 10:46:28.177699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.871 [2024-11-15 10:46:28.177725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.871 [2024-11-15 10:46:28.177739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.871 [2024-11-15 10:46:28.177751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.871 [2024-11-15 10:46:28.177781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 
00:27:39.872 [2024-11-15 10:46:28.187636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.872 [2024-11-15 10:46:28.187756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.872 [2024-11-15 10:46:28.187781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.872 [2024-11-15 10:46:28.187795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.872 [2024-11-15 10:46:28.187807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.872 [2024-11-15 10:46:28.187838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 00:27:39.872 [2024-11-15 10:46:28.197640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.872 [2024-11-15 10:46:28.197767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.872 [2024-11-15 10:46:28.197792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.872 [2024-11-15 10:46:28.197807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.872 [2024-11-15 10:46:28.197819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.872 [2024-11-15 10:46:28.197849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 00:27:39.872 [2024-11-15 10:46:28.207626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.872 [2024-11-15 10:46:28.207751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.872 [2024-11-15 10:46:28.207775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.872 [2024-11-15 10:46:28.207789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.872 [2024-11-15 10:46:28.207801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.872 [2024-11-15 10:46:28.207831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 
00:27:39.872 [2024-11-15 10:46:28.217735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.872 [2024-11-15 10:46:28.217845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.872 [2024-11-15 10:46:28.217871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.872 [2024-11-15 10:46:28.217891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.872 [2024-11-15 10:46:28.217904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.872 [2024-11-15 10:46:28.217935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 00:27:39.872 [2024-11-15 10:46:28.227734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.872 [2024-11-15 10:46:28.227841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.872 [2024-11-15 10:46:28.227867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.872 [2024-11-15 10:46:28.227882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.872 [2024-11-15 10:46:28.227895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.872 [2024-11-15 10:46:28.227936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 00:27:39.872 [2024-11-15 10:46:28.237746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.872 [2024-11-15 10:46:28.237852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.872 [2024-11-15 10:46:28.237877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.872 [2024-11-15 10:46:28.237892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.872 [2024-11-15 10:46:28.237903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.872 [2024-11-15 10:46:28.237934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 
00:27:39.872 [2024-11-15 10:46:28.247823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.872 [2024-11-15 10:46:28.247936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.872 [2024-11-15 10:46:28.247961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.872 [2024-11-15 10:46:28.247976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.872 [2024-11-15 10:46:28.247988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.872 [2024-11-15 10:46:28.248018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 00:27:39.872 [2024-11-15 10:46:28.257841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.872 [2024-11-15 10:46:28.257943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.872 [2024-11-15 10:46:28.257968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.872 [2024-11-15 10:46:28.257982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.872 [2024-11-15 10:46:28.257994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.872 [2024-11-15 10:46:28.258031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 00:27:39.872 [2024-11-15 10:46:28.267858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.872 [2024-11-15 10:46:28.267987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.872 [2024-11-15 10:46:28.268013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.872 [2024-11-15 10:46:28.268027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.872 [2024-11-15 10:46:28.268040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.872 [2024-11-15 10:46:28.268071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 
00:27:39.872 [2024-11-15 10:46:28.277890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.872 [2024-11-15 10:46:28.277981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.872 [2024-11-15 10:46:28.278005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.872 [2024-11-15 10:46:28.278019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.872 [2024-11-15 10:46:28.278031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.872 [2024-11-15 10:46:28.278061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 00:27:39.872 [2024-11-15 10:46:28.287920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.872 [2024-11-15 10:46:28.288071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.872 [2024-11-15 10:46:28.288097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.872 [2024-11-15 10:46:28.288111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.872 [2024-11-15 10:46:28.288123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.872 [2024-11-15 10:46:28.288153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 00:27:39.872 [2024-11-15 10:46:28.297945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.872 [2024-11-15 10:46:28.298073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.872 [2024-11-15 10:46:28.298098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.872 [2024-11-15 10:46:28.298113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.872 [2024-11-15 10:46:28.298125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.872 [2024-11-15 10:46:28.298155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 
00:27:39.872 [2024-11-15 10:46:28.307908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.872 [2024-11-15 10:46:28.308010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.872 [2024-11-15 10:46:28.308036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.872 [2024-11-15 10:46:28.308050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.872 [2024-11-15 10:46:28.308062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:39.872 [2024-11-15 10:46:28.308092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.872 qpair failed and we were unable to recover it. 00:27:40.154 [2024-11-15 10:46:28.317993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.154 [2024-11-15 10:46:28.318079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.154 [2024-11-15 10:46:28.318103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.154 [2024-11-15 10:46:28.318117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.154 [2024-11-15 10:46:28.318129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.154 [2024-11-15 10:46:28.318159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.154 qpair failed and we were unable to recover it. 00:27:40.154 [2024-11-15 10:46:28.327999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.154 [2024-11-15 10:46:28.328089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.154 [2024-11-15 10:46:28.328113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.154 [2024-11-15 10:46:28.328127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.154 [2024-11-15 10:46:28.328138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.154 [2024-11-15 10:46:28.328169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.154 qpair failed and we were unable to recover it. 
00:27:40.154 [2024-11-15 10:46:28.338049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.154 [2024-11-15 10:46:28.338179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.154 [2024-11-15 10:46:28.338205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.154 [2024-11-15 10:46:28.338219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.154 [2024-11-15 10:46:28.338231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.154 [2024-11-15 10:46:28.338261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.154 qpair failed and we were unable to recover it. 00:27:40.154 [2024-11-15 10:46:28.348073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.154 [2024-11-15 10:46:28.348176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.154 [2024-11-15 10:46:28.348207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.154 [2024-11-15 10:46:28.348223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.154 [2024-11-15 10:46:28.348235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.154 [2024-11-15 10:46:28.348266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.154 qpair failed and we were unable to recover it. 00:27:40.154 [2024-11-15 10:46:28.358077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.154 [2024-11-15 10:46:28.358234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.154 [2024-11-15 10:46:28.358260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.154 [2024-11-15 10:46:28.358274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.154 [2024-11-15 10:46:28.358287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.154 [2024-11-15 10:46:28.358317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.154 qpair failed and we were unable to recover it. 
00:27:40.154 [2024-11-15 10:46:28.368115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.154 [2024-11-15 10:46:28.368228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.154 [2024-11-15 10:46:28.368265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.154 [2024-11-15 10:46:28.368280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.154 [2024-11-15 10:46:28.368293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.154 [2024-11-15 10:46:28.368329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.154 qpair failed and we were unable to recover it. 00:27:40.154 [2024-11-15 10:46:28.378140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.154 [2024-11-15 10:46:28.378249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.154 [2024-11-15 10:46:28.378275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.154 [2024-11-15 10:46:28.378289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.154 [2024-11-15 10:46:28.378302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.154 [2024-11-15 10:46:28.378331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.154 qpair failed and we were unable to recover it. 00:27:40.154 [2024-11-15 10:46:28.388167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.154 [2024-11-15 10:46:28.388266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.154 [2024-11-15 10:46:28.388292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.154 [2024-11-15 10:46:28.388307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.154 [2024-11-15 10:46:28.388325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.154 [2024-11-15 10:46:28.388356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.154 qpair failed and we were unable to recover it. 
00:27:40.154 [2024-11-15 10:46:28.398234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.154 [2024-11-15 10:46:28.398334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.154 [2024-11-15 10:46:28.398360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.154 [2024-11-15 10:46:28.398385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.154 [2024-11-15 10:46:28.398397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.154 [2024-11-15 10:46:28.398427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.154 qpair failed and we were unable to recover it. 00:27:40.154 [2024-11-15 10:46:28.408247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.154 [2024-11-15 10:46:28.408356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.155 [2024-11-15 10:46:28.408390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.155 [2024-11-15 10:46:28.408405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.155 [2024-11-15 10:46:28.408417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.155 [2024-11-15 10:46:28.408447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.155 qpair failed and we were unable to recover it. 00:27:40.155 [2024-11-15 10:46:28.418269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.155 [2024-11-15 10:46:28.418382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.155 [2024-11-15 10:46:28.418408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.155 [2024-11-15 10:46:28.418423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.155 [2024-11-15 10:46:28.418435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.155 [2024-11-15 10:46:28.418465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.155 qpair failed and we were unable to recover it. 
00:27:40.155 [2024-11-15 10:46:28.428248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.155 [2024-11-15 10:46:28.428350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.155 [2024-11-15 10:46:28.428386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.155 [2024-11-15 10:46:28.428401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.155 [2024-11-15 10:46:28.428413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.155 [2024-11-15 10:46:28.428443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.155 qpair failed and we were unable to recover it. 00:27:40.155 [2024-11-15 10:46:28.438300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.155 [2024-11-15 10:46:28.438419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.155 [2024-11-15 10:46:28.438445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.155 [2024-11-15 10:46:28.438460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.155 [2024-11-15 10:46:28.438472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.155 [2024-11-15 10:46:28.438502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.155 qpair failed and we were unable to recover it. 00:27:40.155 [2024-11-15 10:46:28.448411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.155 [2024-11-15 10:46:28.448512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.155 [2024-11-15 10:46:28.448538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.155 [2024-11-15 10:46:28.448552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.155 [2024-11-15 10:46:28.448564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.155 [2024-11-15 10:46:28.448594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.155 qpair failed and we were unable to recover it. 
00:27:40.155 [2024-11-15 10:46:28.458405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.155 [2024-11-15 10:46:28.458512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.155 [2024-11-15 10:46:28.458538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.155 [2024-11-15 10:46:28.458552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.155 [2024-11-15 10:46:28.458564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.155 [2024-11-15 10:46:28.458595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.155 qpair failed and we were unable to recover it. 00:27:40.155 [2024-11-15 10:46:28.468447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.155 [2024-11-15 10:46:28.468561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.155 [2024-11-15 10:46:28.468587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.155 [2024-11-15 10:46:28.468601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.155 [2024-11-15 10:46:28.468613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.155 [2024-11-15 10:46:28.468643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.155 qpair failed and we were unable to recover it. 00:27:40.155 [2024-11-15 10:46:28.478464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.155 [2024-11-15 10:46:28.478552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.155 [2024-11-15 10:46:28.478584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.155 [2024-11-15 10:46:28.478599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.155 [2024-11-15 10:46:28.478611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.155 [2024-11-15 10:46:28.478641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.155 qpair failed and we were unable to recover it. 
00:27:40.155 [2024-11-15 10:46:28.488460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.155 [2024-11-15 10:46:28.488601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.155 [2024-11-15 10:46:28.488627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.155 [2024-11-15 10:46:28.488642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.155 [2024-11-15 10:46:28.488653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.155 [2024-11-15 10:46:28.488683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.155 qpair failed and we were unable to recover it. 00:27:40.155 [2024-11-15 10:46:28.498473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.155 [2024-11-15 10:46:28.498561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.155 [2024-11-15 10:46:28.498586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.155 [2024-11-15 10:46:28.498600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.155 [2024-11-15 10:46:28.498612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.155 [2024-11-15 10:46:28.498641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.155 qpair failed and we were unable to recover it. 00:27:40.155 [2024-11-15 10:46:28.508473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.155 [2024-11-15 10:46:28.508565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.155 [2024-11-15 10:46:28.508589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.155 [2024-11-15 10:46:28.508603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.155 [2024-11-15 10:46:28.508615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.155 [2024-11-15 10:46:28.508645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.155 qpair failed and we were unable to recover it. 
00:27:40.155 [2024-11-15 10:46:28.518605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.155 [2024-11-15 10:46:28.518689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.155 [2024-11-15 10:46:28.518713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.155 [2024-11-15 10:46:28.518727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.155 [2024-11-15 10:46:28.518746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.155 [2024-11-15 10:46:28.518776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.155 qpair failed and we were unable to recover it. 00:27:40.155 [2024-11-15 10:46:28.528561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.155 [2024-11-15 10:46:28.528660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.155 [2024-11-15 10:46:28.528684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.155 [2024-11-15 10:46:28.528698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.155 [2024-11-15 10:46:28.528710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.155 [2024-11-15 10:46:28.528740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.155 qpair failed and we were unable to recover it. 00:27:40.155 [2024-11-15 10:46:28.538584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.156 [2024-11-15 10:46:28.538688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.156 [2024-11-15 10:46:28.538714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.156 [2024-11-15 10:46:28.538728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.156 [2024-11-15 10:46:28.538740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.156 [2024-11-15 10:46:28.538770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.156 qpair failed and we were unable to recover it. 
00:27:40.156 [2024-11-15 10:46:28.548587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.156 [2024-11-15 10:46:28.548674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.156 [2024-11-15 10:46:28.548697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.156 [2024-11-15 10:46:28.548711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.156 [2024-11-15 10:46:28.548723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.156 [2024-11-15 10:46:28.548753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.156 qpair failed and we were unable to recover it. 00:27:40.156 [2024-11-15 10:46:28.558641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.156 [2024-11-15 10:46:28.558778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.156 [2024-11-15 10:46:28.558803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.156 [2024-11-15 10:46:28.558817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.156 [2024-11-15 10:46:28.558829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.156 [2024-11-15 10:46:28.558859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.156 qpair failed and we were unable to recover it. 00:27:40.156 [2024-11-15 10:46:28.568716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.156 [2024-11-15 10:46:28.568864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.156 [2024-11-15 10:46:28.568890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.156 [2024-11-15 10:46:28.568905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.156 [2024-11-15 10:46:28.568917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.156 [2024-11-15 10:46:28.568946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.156 qpair failed and we were unable to recover it. 
00:27:40.156 [2024-11-15 10:46:28.578708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.156 [2024-11-15 10:46:28.578808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.156 [2024-11-15 10:46:28.578833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.156 [2024-11-15 10:46:28.578848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.156 [2024-11-15 10:46:28.578859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.156 [2024-11-15 10:46:28.578890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.156 qpair failed and we were unable to recover it. 00:27:40.156 [2024-11-15 10:46:28.588758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.156 [2024-11-15 10:46:28.588861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.156 [2024-11-15 10:46:28.588889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.156 [2024-11-15 10:46:28.588904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.156 [2024-11-15 10:46:28.588916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.156 [2024-11-15 10:46:28.588955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.156 qpair failed and we were unable to recover it. 00:27:40.156 [2024-11-15 10:46:28.598760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.156 [2024-11-15 10:46:28.598861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.156 [2024-11-15 10:46:28.598887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.156 [2024-11-15 10:46:28.598902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.156 [2024-11-15 10:46:28.598917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.156 [2024-11-15 10:46:28.598947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.156 qpair failed and we were unable to recover it. 
00:27:40.156 [2024-11-15 10:46:28.608807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.156 [2024-11-15 10:46:28.608912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.156 [2024-11-15 10:46:28.608944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.156 [2024-11-15 10:46:28.608970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.156 [2024-11-15 10:46:28.608982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.156 [2024-11-15 10:46:28.609013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.156 qpair failed and we were unable to recover it. 00:27:40.156 [2024-11-15 10:46:28.618828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.156 [2024-11-15 10:46:28.618930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.156 [2024-11-15 10:46:28.618956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.156 [2024-11-15 10:46:28.618971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.156 [2024-11-15 10:46:28.618983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.156 [2024-11-15 10:46:28.619016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.156 qpair failed and we were unable to recover it. 00:27:40.414 [2024-11-15 10:46:28.628851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.414 [2024-11-15 10:46:28.628957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.414 [2024-11-15 10:46:28.628983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.414 [2024-11-15 10:46:28.628998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.414 [2024-11-15 10:46:28.629010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.414 [2024-11-15 10:46:28.629039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.414 qpair failed and we were unable to recover it. 
00:27:40.414 [2024-11-15 10:46:28.638861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.414 [2024-11-15 10:46:28.638960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.414 [2024-11-15 10:46:28.638986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.414 [2024-11-15 10:46:28.639001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.414 [2024-11-15 10:46:28.639013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.414 [2024-11-15 10:46:28.639054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.414 qpair failed and we were unable to recover it. 00:27:40.414 [2024-11-15 10:46:28.648935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.414 [2024-11-15 10:46:28.649082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.414 [2024-11-15 10:46:28.649108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.414 [2024-11-15 10:46:28.649130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.414 [2024-11-15 10:46:28.649143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.414 [2024-11-15 10:46:28.649174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.414 qpair failed and we were unable to recover it. 00:27:40.414 [2024-11-15 10:46:28.658938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.414 [2024-11-15 10:46:28.659041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.414 [2024-11-15 10:46:28.659066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.414 [2024-11-15 10:46:28.659080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.414 [2024-11-15 10:46:28.659093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.414 [2024-11-15 10:46:28.659122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.414 qpair failed and we were unable to recover it. 
00:27:40.414 [2024-11-15 10:46:28.668938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.414 [2024-11-15 10:46:28.669037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.414 [2024-11-15 10:46:28.669063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.415 [2024-11-15 10:46:28.669077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.415 [2024-11-15 10:46:28.669089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.415 [2024-11-15 10:46:28.669120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.415 qpair failed and we were unable to recover it. 00:27:40.415 [2024-11-15 10:46:28.678990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.415 [2024-11-15 10:46:28.679090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.415 [2024-11-15 10:46:28.679116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.415 [2024-11-15 10:46:28.679130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.415 [2024-11-15 10:46:28.679142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.415 [2024-11-15 10:46:28.679179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.415 qpair failed and we were unable to recover it. 00:27:40.415 [2024-11-15 10:46:28.689061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.415 [2024-11-15 10:46:28.689178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.415 [2024-11-15 10:46:28.689203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.415 [2024-11-15 10:46:28.689218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.415 [2024-11-15 10:46:28.689230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.415 [2024-11-15 10:46:28.689265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.415 qpair failed and we were unable to recover it. 
00:27:40.415 [2024-11-15 10:46:28.699023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.415 [2024-11-15 10:46:28.699124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.415 [2024-11-15 10:46:28.699150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.415 [2024-11-15 10:46:28.699165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.415 [2024-11-15 10:46:28.699177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.415 [2024-11-15 10:46:28.699208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.415 qpair failed and we were unable to recover it. 00:27:40.415 [2024-11-15 10:46:28.709055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.415 [2024-11-15 10:46:28.709163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.415 [2024-11-15 10:46:28.709189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.415 [2024-11-15 10:46:28.709203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.415 [2024-11-15 10:46:28.709215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.415 [2024-11-15 10:46:28.709253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.415 qpair failed and we were unable to recover it. 00:27:40.415 [2024-11-15 10:46:28.719089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.415 [2024-11-15 10:46:28.719191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.415 [2024-11-15 10:46:28.719217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.415 [2024-11-15 10:46:28.719231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.415 [2024-11-15 10:46:28.719243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.415 [2024-11-15 10:46:28.719274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.415 qpair failed and we were unable to recover it. 
00:27:40.415 [2024-11-15 10:46:28.729102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.415 [2024-11-15 10:46:28.729209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.415 [2024-11-15 10:46:28.729234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.415 [2024-11-15 10:46:28.729249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.415 [2024-11-15 10:46:28.729260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.415 [2024-11-15 10:46:28.729301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.415 qpair failed and we were unable to recover it. 00:27:40.415 [2024-11-15 10:46:28.739171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.415 [2024-11-15 10:46:28.739306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.415 [2024-11-15 10:46:28.739333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.415 [2024-11-15 10:46:28.739347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.415 [2024-11-15 10:46:28.739360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.415 [2024-11-15 10:46:28.739408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.415 qpair failed and we were unable to recover it. 00:27:40.415 [2024-11-15 10:46:28.749171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.415 [2024-11-15 10:46:28.749271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.415 [2024-11-15 10:46:28.749297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.415 [2024-11-15 10:46:28.749312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.415 [2024-11-15 10:46:28.749324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.415 [2024-11-15 10:46:28.749371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.415 qpair failed and we were unable to recover it. 
00:27:40.415 [2024-11-15 10:46:28.759193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.415 [2024-11-15 10:46:28.759306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.415 [2024-11-15 10:46:28.759331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.415 [2024-11-15 10:46:28.759346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.415 [2024-11-15 10:46:28.759359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.415 [2024-11-15 10:46:28.759400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.415 qpair failed and we were unable to recover it. 00:27:40.415 [2024-11-15 10:46:28.769240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.415 [2024-11-15 10:46:28.769343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.415 [2024-11-15 10:46:28.769375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.415 [2024-11-15 10:46:28.769392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.415 [2024-11-15 10:46:28.769404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.415 [2024-11-15 10:46:28.769434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.415 qpair failed and we were unable to recover it. 00:27:40.415 [2024-11-15 10:46:28.779285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.415 [2024-11-15 10:46:28.779381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.415 [2024-11-15 10:46:28.779406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.415 [2024-11-15 10:46:28.779429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.415 [2024-11-15 10:46:28.779441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.416 [2024-11-15 10:46:28.779471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.416 qpair failed and we were unable to recover it. 
00:27:40.416 [2024-11-15 10:46:28.789270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.416 [2024-11-15 10:46:28.789379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.416 [2024-11-15 10:46:28.789404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.416 [2024-11-15 10:46:28.789418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.416 [2024-11-15 10:46:28.789430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.416 [2024-11-15 10:46:28.789466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.416 qpair failed and we were unable to recover it. 00:27:40.416 [2024-11-15 10:46:28.799303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.416 [2024-11-15 10:46:28.799416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.416 [2024-11-15 10:46:28.799442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.416 [2024-11-15 10:46:28.799457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.416 [2024-11-15 10:46:28.799470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.416 [2024-11-15 10:46:28.799501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.416 qpair failed and we were unable to recover it. 00:27:40.416 [2024-11-15 10:46:28.809339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.416 [2024-11-15 10:46:28.809465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.416 [2024-11-15 10:46:28.809491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.416 [2024-11-15 10:46:28.809516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.416 [2024-11-15 10:46:28.809528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.416 [2024-11-15 10:46:28.809558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.416 qpair failed and we were unable to recover it. 
00:27:40.416 [2024-11-15 10:46:28.819415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.416 [2024-11-15 10:46:28.819501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.416 [2024-11-15 10:46:28.819526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.416 [2024-11-15 10:46:28.819540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.416 [2024-11-15 10:46:28.819551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.416 [2024-11-15 10:46:28.819587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.416 qpair failed and we were unable to recover it. 00:27:40.416 [2024-11-15 10:46:28.829403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.416 [2024-11-15 10:46:28.829533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.416 [2024-11-15 10:46:28.829559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.416 [2024-11-15 10:46:28.829573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.416 [2024-11-15 10:46:28.829585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.416 [2024-11-15 10:46:28.829615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.416 qpair failed and we were unable to recover it. 00:27:40.416 [2024-11-15 10:46:28.839460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.416 [2024-11-15 10:46:28.839602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.416 [2024-11-15 10:46:28.839628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.416 [2024-11-15 10:46:28.839642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.416 [2024-11-15 10:46:28.839654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.416 [2024-11-15 10:46:28.839685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.416 qpair failed and we were unable to recover it. 
00:27:40.416 [2024-11-15 10:46:28.849453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.416 [2024-11-15 10:46:28.849575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.416 [2024-11-15 10:46:28.849601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.416 [2024-11-15 10:46:28.849615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.416 [2024-11-15 10:46:28.849626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.416 [2024-11-15 10:46:28.849657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.416 qpair failed and we were unable to recover it. 00:27:40.416 [2024-11-15 10:46:28.859523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.416 [2024-11-15 10:46:28.859609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.416 [2024-11-15 10:46:28.859634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.416 [2024-11-15 10:46:28.859648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.416 [2024-11-15 10:46:28.859660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.416 [2024-11-15 10:46:28.859690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.416 qpair failed and we were unable to recover it. 00:27:40.416 [2024-11-15 10:46:28.869468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.416 [2024-11-15 10:46:28.869602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.416 [2024-11-15 10:46:28.869628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.416 [2024-11-15 10:46:28.869642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.416 [2024-11-15 10:46:28.869653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.416 [2024-11-15 10:46:28.869683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.416 qpair failed and we were unable to recover it. 
00:27:40.416 [2024-11-15 10:46:28.879547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.416 [2024-11-15 10:46:28.879638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.416 [2024-11-15 10:46:28.879666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.416 [2024-11-15 10:46:28.879679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.416 [2024-11-15 10:46:28.879691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.416 [2024-11-15 10:46:28.879721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.416 qpair failed and we were unable to recover it. 00:27:40.676 [2024-11-15 10:46:28.889555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.676 [2024-11-15 10:46:28.889656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.676 [2024-11-15 10:46:28.889681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.676 [2024-11-15 10:46:28.889695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.676 [2024-11-15 10:46:28.889707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.676 [2024-11-15 10:46:28.889738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.676 qpair failed and we were unable to recover it. 00:27:40.676 [2024-11-15 10:46:28.899565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.676 [2024-11-15 10:46:28.899657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.676 [2024-11-15 10:46:28.899682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.676 [2024-11-15 10:46:28.899695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.676 [2024-11-15 10:46:28.899707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.676 [2024-11-15 10:46:28.899737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.676 qpair failed and we were unable to recover it. 
00:27:40.676 [2024-11-15 10:46:28.909613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.676 [2024-11-15 10:46:28.909726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.676 [2024-11-15 10:46:28.909755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.676 [2024-11-15 10:46:28.909770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.676 [2024-11-15 10:46:28.909781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.676 [2024-11-15 10:46:28.909812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.676 qpair failed and we were unable to recover it. 00:27:40.676 [2024-11-15 10:46:28.919607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.676 [2024-11-15 10:46:28.919696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.676 [2024-11-15 10:46:28.919720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.676 [2024-11-15 10:46:28.919734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.676 [2024-11-15 10:46:28.919746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.676 [2024-11-15 10:46:28.919776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.676 qpair failed and we were unable to recover it. 00:27:40.676 [2024-11-15 10:46:28.929656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.676 [2024-11-15 10:46:28.929780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.676 [2024-11-15 10:46:28.929804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.676 [2024-11-15 10:46:28.929818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.676 [2024-11-15 10:46:28.929830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.676 [2024-11-15 10:46:28.929860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.676 qpair failed and we were unable to recover it. 
00:27:40.676 [2024-11-15 10:46:28.939722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.676 [2024-11-15 10:46:28.939820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.676 [2024-11-15 10:46:28.939844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.676 [2024-11-15 10:46:28.939858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.676 [2024-11-15 10:46:28.939870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.676 [2024-11-15 10:46:28.939899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.676 qpair failed and we were unable to recover it. 00:27:40.676 [2024-11-15 10:46:28.949726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.676 [2024-11-15 10:46:28.949825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.676 [2024-11-15 10:46:28.949850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.676 [2024-11-15 10:46:28.949864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.676 [2024-11-15 10:46:28.949881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.676 [2024-11-15 10:46:28.949912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.676 qpair failed and we were unable to recover it. 00:27:40.676 [2024-11-15 10:46:28.959827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.676 [2024-11-15 10:46:28.959924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.676 [2024-11-15 10:46:28.959950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.676 [2024-11-15 10:46:28.959964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.676 [2024-11-15 10:46:28.959975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.676 [2024-11-15 10:46:28.960005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.676 qpair failed and we were unable to recover it. 
00:27:40.676 [2024-11-15 10:46:28.969896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.676 [2024-11-15 10:46:28.970025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.676 [2024-11-15 10:46:28.970051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.676 [2024-11-15 10:46:28.970066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.676 [2024-11-15 10:46:28.970078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.676 [2024-11-15 10:46:28.970109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.676 qpair failed and we were unable to recover it. 00:27:40.676 [2024-11-15 10:46:28.979827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.676 [2024-11-15 10:46:28.979927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.676 [2024-11-15 10:46:28.979952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.676 [2024-11-15 10:46:28.979966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.676 [2024-11-15 10:46:28.979978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.677 [2024-11-15 10:46:28.980008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.677 qpair failed and we were unable to recover it. 00:27:40.677 [2024-11-15 10:46:28.989825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.677 [2024-11-15 10:46:28.989923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.677 [2024-11-15 10:46:28.989952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.677 [2024-11-15 10:46:28.989967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.677 [2024-11-15 10:46:28.989979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.677 [2024-11-15 10:46:28.990009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.677 qpair failed and we were unable to recover it. 
00:27:40.677 [2024-11-15 10:46:28.999879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.677 [2024-11-15 10:46:28.999999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.677 [2024-11-15 10:46:29.000025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.677 [2024-11-15 10:46:29.000040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.677 [2024-11-15 10:46:29.000052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.677 [2024-11-15 10:46:29.000083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.677 qpair failed and we were unable to recover it. 00:27:40.677 [2024-11-15 10:46:29.009900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.677 [2024-11-15 10:46:29.010004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.677 [2024-11-15 10:46:29.010030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.677 [2024-11-15 10:46:29.010044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.677 [2024-11-15 10:46:29.010056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.677 [2024-11-15 10:46:29.010086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.677 qpair failed and we were unable to recover it. 00:27:40.677 [2024-11-15 10:46:29.020011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.677 [2024-11-15 10:46:29.020117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.677 [2024-11-15 10:46:29.020142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.677 [2024-11-15 10:46:29.020155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.677 [2024-11-15 10:46:29.020167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.677 [2024-11-15 10:46:29.020198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.677 qpair failed and we were unable to recover it. 
00:27:40.677 [2024-11-15 10:46:29.029975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.677 [2024-11-15 10:46:29.030082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.677 [2024-11-15 10:46:29.030107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.677 [2024-11-15 10:46:29.030122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.677 [2024-11-15 10:46:29.030133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.677 [2024-11-15 10:46:29.030163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.677 qpair failed and we were unable to recover it. 00:27:40.677 [2024-11-15 10:46:29.040027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.677 [2024-11-15 10:46:29.040127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.677 [2024-11-15 10:46:29.040158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.677 [2024-11-15 10:46:29.040173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.677 [2024-11-15 10:46:29.040185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.677 [2024-11-15 10:46:29.040215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.677 qpair failed and we were unable to recover it. 00:27:40.677 [2024-11-15 10:46:29.050091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.677 [2024-11-15 10:46:29.050201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.677 [2024-11-15 10:46:29.050226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.677 [2024-11-15 10:46:29.050241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.677 [2024-11-15 10:46:29.050253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.677 [2024-11-15 10:46:29.050282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.677 qpair failed and we were unable to recover it. 
00:27:40.677 [2024-11-15 10:46:29.060014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.677 [2024-11-15 10:46:29.060113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.677 [2024-11-15 10:46:29.060139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.677 [2024-11-15 10:46:29.060153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.677 [2024-11-15 10:46:29.060165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.677 [2024-11-15 10:46:29.060195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.677 qpair failed and we were unable to recover it. 00:27:40.677 [2024-11-15 10:46:29.070078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.677 [2024-11-15 10:46:29.070182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.677 [2024-11-15 10:46:29.070208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.677 [2024-11-15 10:46:29.070223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.677 [2024-11-15 10:46:29.070235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.677 [2024-11-15 10:46:29.070265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.677 qpair failed and we were unable to recover it. 00:27:40.677 [2024-11-15 10:46:29.080110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.677 [2024-11-15 10:46:29.080256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.677 [2024-11-15 10:46:29.080282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.677 [2024-11-15 10:46:29.080297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.677 [2024-11-15 10:46:29.080315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.677 [2024-11-15 10:46:29.080347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.677 qpair failed and we were unable to recover it. 
00:27:40.677 [2024-11-15 10:46:29.090140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.677 [2024-11-15 10:46:29.090247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.677 [2024-11-15 10:46:29.090271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.677 [2024-11-15 10:46:29.090285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.677 [2024-11-15 10:46:29.090296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.677 [2024-11-15 10:46:29.090327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.677 qpair failed and we were unable to recover it. 00:27:40.677 [2024-11-15 10:46:29.100159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.677 [2024-11-15 10:46:29.100260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.677 [2024-11-15 10:46:29.100285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.677 [2024-11-15 10:46:29.100299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.677 [2024-11-15 10:46:29.100311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.677 [2024-11-15 10:46:29.100340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.677 qpair failed and we were unable to recover it. 00:27:40.677 [2024-11-15 10:46:29.110191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.678 [2024-11-15 10:46:29.110290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.678 [2024-11-15 10:46:29.110317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.678 [2024-11-15 10:46:29.110331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.678 [2024-11-15 10:46:29.110343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.678 [2024-11-15 10:46:29.110380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.678 qpair failed and we were unable to recover it. 
00:27:40.678 [2024-11-15 10:46:29.120203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.678 [2024-11-15 10:46:29.120302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.678 [2024-11-15 10:46:29.120328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.678 [2024-11-15 10:46:29.120343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.678 [2024-11-15 10:46:29.120355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.678 [2024-11-15 10:46:29.120393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.678 qpair failed and we were unable to recover it. 00:27:40.678 [2024-11-15 10:46:29.130270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.678 [2024-11-15 10:46:29.130389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.678 [2024-11-15 10:46:29.130424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.678 [2024-11-15 10:46:29.130439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.678 [2024-11-15 10:46:29.130450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.678 [2024-11-15 10:46:29.130481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.678 qpair failed and we were unable to recover it. 00:27:40.678 [2024-11-15 10:46:29.140282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.678 [2024-11-15 10:46:29.140388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.678 [2024-11-15 10:46:29.140413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.678 [2024-11-15 10:46:29.140427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.678 [2024-11-15 10:46:29.140439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90 00:27:40.678 [2024-11-15 10:46:29.140469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.678 qpair failed and we were unable to recover it. 
00:27:40.936 [2024-11-15 10:46:29.150271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.936 [2024-11-15 10:46:29.150400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.936 [2024-11-15 10:46:29.150426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.936 [2024-11-15 10:46:29.150440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.936 [2024-11-15 10:46:29.150452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b4c000b90
00:27:40.936 [2024-11-15 10:46:29.150482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:40.936 qpair failed and we were unable to recover it.
00:27:40.936 [2024-11-15 10:46:29.150629] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:27:40.936 A controller has encountered a failure and is being reset.
00:27:40.936 [2024-11-15 10:46:29.150695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1214f30 (9): Bad file descriptor
00:27:40.936 Controller properly reset.
00:27:40.936 Initializing NVMe Controllers
00:27:40.936 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:40.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:40.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:40.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:40.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:40.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:40.936 Initialization complete. Launching workers.
00:27:40.936 Starting thread on core 1
00:27:40.936 Starting thread on core 2
00:27:40.936 Starting thread on core 3
00:27:40.936 Starting thread on core 0
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:27:40.936
00:27:40.936 real 0m10.731s
00:27:40.936 user 0m19.173s
00:27:40.936 sys 0m5.198s
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:40.936 ************************************
00:27:40.936 END TEST nvmf_target_disconnect_tc2
00:27:40.936 ************************************
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:40.936 rmmod nvme_tcp
00:27:40.936 rmmod nvme_fabrics
00:27:40.936 rmmod nvme_keyring
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 491606 ']'
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 491606
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 491606 ']'
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 491606
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 491606
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']'
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 491606'
00:27:40.936 killing process with pid 491606
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 491606
00:27:40.936 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 491606
00:27:41.194 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:41.194 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:41.194 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:41.194 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:27:41.194 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:27:41.194 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:41.194 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:27:41.194 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:41.194 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:41.194 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:41.194 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:41.194 10:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:43.726 10:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:43.726
00:27:43.726 real 0m15.752s
00:27:43.726 user 0m45.162s
00:27:43.726 sys 0m7.323s
00:27:43.726 10:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable
00:27:43.726 10:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:43.726 ************************************
00:27:43.726 END TEST nvmf_target_disconnect
00:27:43.726 ************************************
00:27:43.727 10:46:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:27:43.727
00:27:43.727 real 5m16.246s
00:27:43.727 user 11m7.251s
00:27:43.727 sys 1m17.866s
00:27:43.727 10:46:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable
00:27:43.727 10:46:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.727 ************************************
00:27:43.727 END TEST nvmf_host
00:27:43.727 ************************************
00:27:43.727 10:46:31 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:27:43.727 10:46:31 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:27:43.727 10:46:31 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:27:43.727 10:46:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:27:43.727 10:46:31 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable
00:27:43.727 10:46:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:43.727 ************************************
00:27:43.727 START TEST nvmf_target_core_interrupt_mode
00:27:43.727 ************************************
00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:43.727 * Looking for test storage... 00:27:43.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:43.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.727 --rc genhtml_branch_coverage=1 00:27:43.727 --rc genhtml_function_coverage=1 00:27:43.727 --rc genhtml_legend=1 00:27:43.727 --rc geninfo_all_blocks=1 00:27:43.727 --rc geninfo_unexecuted_blocks=1 00:27:43.727 00:27:43.727 ' 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:43.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.727 --rc genhtml_branch_coverage=1 00:27:43.727 --rc genhtml_function_coverage=1 00:27:43.727 --rc genhtml_legend=1 00:27:43.727 --rc geninfo_all_blocks=1 00:27:43.727 --rc geninfo_unexecuted_blocks=1 00:27:43.727 00:27:43.727 ' 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:43.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.727 --rc genhtml_branch_coverage=1 00:27:43.727 --rc genhtml_function_coverage=1 00:27:43.727 --rc genhtml_legend=1 00:27:43.727 --rc geninfo_all_blocks=1 00:27:43.727 --rc geninfo_unexecuted_blocks=1 00:27:43.727 00:27:43.727 ' 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:43.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.727 --rc genhtml_branch_coverage=1 00:27:43.727 --rc genhtml_function_coverage=1 00:27:43.727 --rc genhtml_legend=1 00:27:43.727 --rc geninfo_all_blocks=1 00:27:43.727 --rc geninfo_unexecuted_blocks=1 00:27:43.727 00:27:43.727 ' 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:43.727 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:43.728 ************************************ 00:27:43.728 START TEST nvmf_abort 00:27:43.728 ************************************ 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:43.728 * Looking for test storage... 00:27:43.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:27:43.728 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:43.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.728 --rc genhtml_branch_coverage=1 00:27:43.728 --rc genhtml_function_coverage=1 00:27:43.728 --rc genhtml_legend=1 00:27:43.728 --rc geninfo_all_blocks=1 00:27:43.728 --rc geninfo_unexecuted_blocks=1 00:27:43.728 00:27:43.728 ' 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:43.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.728 --rc genhtml_branch_coverage=1 00:27:43.728 --rc genhtml_function_coverage=1 00:27:43.728 --rc genhtml_legend=1 00:27:43.728 --rc geninfo_all_blocks=1 00:27:43.728 --rc geninfo_unexecuted_blocks=1 00:27:43.728 00:27:43.728 ' 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:43.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.728 --rc genhtml_branch_coverage=1 00:27:43.728 --rc genhtml_function_coverage=1 00:27:43.728 --rc genhtml_legend=1 00:27:43.728 --rc geninfo_all_blocks=1 00:27:43.728 --rc geninfo_unexecuted_blocks=1 00:27:43.728 00:27:43.728 ' 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:43.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.728 --rc genhtml_branch_coverage=1 00:27:43.728 --rc genhtml_function_coverage=1 00:27:43.728 --rc genhtml_legend=1 00:27:43.728 --rc geninfo_all_blocks=1 00:27:43.728 --rc geninfo_unexecuted_blocks=1 00:27:43.728 00:27:43.728 ' 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.728 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.729 10:46:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:43.729 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:46.259 10:46:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:27:46.259 Found 0000:82:00.0 (0x8086 - 0x159b) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:27:46.259 Found 0000:82:00.1 (0x8086 - 0x159b) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:46.259 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:27:46.260 Found net devices under 0000:82:00.0: cvl_0_0 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:27:46.260 Found net devices under 0000:82:00.1: cvl_0_1 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:46.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:27:46.260 00:27:46.260 --- 10.0.0.2 ping statistics --- 00:27:46.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.260 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:46.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:27:46.260 00:27:46.260 --- 10.0.0.1 ping statistics --- 00:27:46.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.260 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=494421 
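The nvmftestinit trace above builds a self-contained NVMe/TCP topology on a single host: the second E810 port stays in the default namespace as the initiator (10.0.0.1 on cvl_0_1), the first port is moved into a fresh network namespace for the target (10.0.0.2 on cvl_0_0), TCP port 4420 is opened with an SPDK-tagged iptables rule, and connectivity is verified with one ping in each direction. A condensed sketch of those steps, assuming the interface names and addresses of this particular run:

```bash
#!/usr/bin/env bash
# Condensed from the nvmftestinit trace above. cvl_0_0/cvl_0_1 and the
# 10.0.0.0/24 addresses are specific to this test host.
set -e
TGT_NS=cvl_0_0_ns_spdk                          # namespace that owns the target-side port

ip -4 addr flush cvl_0_0                        # start from clean interfaces
ip -4 addr flush cvl_0_1

ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"             # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, default namespace
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up

# Open NVMe/TCP (4420) on the initiator-facing interface. The comment tag is what
# lets the teardown later drop the rule via iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1      # target -> initiator
```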
00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:46.260 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 494421 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 494421 ']' 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.261 [2024-11-15 10:46:34.392059] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:46.261 [2024-11-15 10:46:34.393099] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:27:46.261 [2024-11-15 10:46:34.393162] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.261 [2024-11-15 10:46:34.462989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:46.261 [2024-11-15 10:46:34.515962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.261 [2024-11-15 10:46:34.516019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.261 [2024-11-15 10:46:34.516046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.261 [2024-11-15 10:46:34.516056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.261 [2024-11-15 10:46:34.516066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.261 [2024-11-15 10:46:34.517512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:46.261 [2024-11-15 10:46:34.517566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:46.261 [2024-11-15 10:46:34.517569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.261 [2024-11-15 10:46:34.602461] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:46.261 [2024-11-15 10:46:34.602675] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:46.261 [2024-11-15 10:46:34.602708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:46.261 [2024-11-15 10:46:34.602970] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.261 [2024-11-15 10:46:34.658230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.261 Malloc0 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.261 Delay0 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.261 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.519 [2024-11-15 10:46:34.738442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.519 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:46.519 [2024-11-15 10:46:34.847408] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:49.050 Initializing NVMe Controllers 00:27:49.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:49.050 controller IO queue size 128 less than required 00:27:49.050 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:49.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:49.050 Initialization complete. Launching workers. 
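The rpc_cmd calls traced above are the harness's wrapper around scripts/rpc.py, driving the nvmf_tgt that was started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE). Collected into one place, and assuming only the explicit rpc.py spelling of those wrappers, the abort target setup and the load generator look roughly like this:

```bash
#!/usr/bin/env bash
# Sketch of the traced target/abort.sh configuration; every flag value is taken
# from the trace, the explicit rpc.py invocation is the assumed equivalent of rpc_cmd.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"

# TCP transport with the traced options (-o -u 8192 -a 256)
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256

# 64 MB malloc bdev (4096-byte blocks) wrapped in a delay bdev, so that
# submitted I/O sits in the target long enough for aborts to find it
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Subsystem cnode0 (serial SPDK0, any host allowed) exporting Delay0,
# listening on the namespaced target address 10.0.0.2:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Abort example from the initiator side: 1 core, 1 second run, queue depth 128
$SPDK/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
```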
00:27:49.050 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28744 00:27:49.050 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28801, failed to submit 66 00:27:49.050 success 28744, unsuccessful 57, failed 0 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:49.050 rmmod nvme_tcp 00:27:49.050 rmmod nvme_fabrics 00:27:49.050 rmmod nvme_keyring 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:49.050 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 494421 ']' 00:27:49.051 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 494421 00:27:49.051 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 494421 ']' 00:27:49.051 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 494421 00:27:49.051 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 494421 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 494421' 00:27:49.051 killing process with pid 494421 00:27:49.051 
10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 494421 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 494421 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.051 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.958 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.958 00:27:50.958 real 0m7.396s 00:27:50.958 user 0m9.293s 00:27:50.958 sys 0m2.979s 00:27:50.958 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:50.958 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:50.958 ************************************ 00:27:50.958 END TEST nvmf_abort 00:27:50.958 ************************************ 00:27:50.958 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:50.958 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:50.958 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:50.958 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:50.958 ************************************ 00:27:50.958 START TEST nvmf_ns_hotplug_stress 00:27:50.958 ************************************ 00:27:50.958 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:50.958 * Looking for test storage... 
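nvmf_abort completed in roughly 7.4 s of wall-clock time (0m7.396s real) with 28801 aborts submitted, 28744 successful and none failed outright. Before nvmf_ns_hotplug_stress rebuilds the same environment, the teardown traced above (trap -> nvmftestfini) amounts to the following sketch; the pid and interface names are specific to this run, and the namespace removal happens inside remove_spdk_ns with tracing disabled, so the ip netns delete spelling is an assumption:

```bash
#!/usr/bin/env bash
# Condensed from the traced nvmftestfini/nvmfcleanup path for this run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0

sync
modprobe -v -r nvme-tcp          # also drags out nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics

kill 494421 && wait 494421       # mirrors killprocess in the trace (pid is run-specific)

# Drop only the SPDK-tagged firewall rules added during nvmftestinit
iptables-save | grep -v SPDK_NVMF | iptables-restore

# remove_spdk_ns is traced with xtrace disabled; deleting the namespace and
# flushing the leftover initiator address is the assumed effect.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
```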
00:27:50.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:50.958 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:50.958 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:27:50.958 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:51.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.217 --rc genhtml_branch_coverage=1 00:27:51.217 --rc genhtml_function_coverage=1 00:27:51.217 --rc genhtml_legend=1 00:27:51.217 --rc geninfo_all_blocks=1 00:27:51.217 --rc geninfo_unexecuted_blocks=1 00:27:51.217 00:27:51.217 ' 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:51.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.217 --rc genhtml_branch_coverage=1 00:27:51.217 --rc genhtml_function_coverage=1 00:27:51.217 --rc genhtml_legend=1 00:27:51.217 --rc geninfo_all_blocks=1 00:27:51.217 --rc geninfo_unexecuted_blocks=1 00:27:51.217 00:27:51.217 ' 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:51.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.217 --rc genhtml_branch_coverage=1 00:27:51.217 --rc genhtml_function_coverage=1 00:27:51.217 --rc genhtml_legend=1 00:27:51.217 --rc geninfo_all_blocks=1 00:27:51.217 --rc geninfo_unexecuted_blocks=1 00:27:51.217 00:27:51.217 ' 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:51.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.217 --rc genhtml_branch_coverage=1 00:27:51.217 --rc genhtml_function_coverage=1 
00:27:51.217 --rc genhtml_legend=1 00:27:51.217 --rc geninfo_all_blocks=1 00:27:51.217 --rc geninfo_unexecuted_blocks=1 00:27:51.217 00:27:51.217 ' 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:51.217 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
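Before the hotplug-stress run proper starts, the harness probes the installed lcov and decides which --rc coverage options to export; the lt 1.15 2 / cmp_versions trace a few lines above splits both version strings on '.', '-' or ':' and compares them numerically, field by field. A minimal standalone re-implementation of that check, for illustration only:

```bash
#!/usr/bin/env bash
# Illustration of the cmp_versions '<' path traced above (scripts/common.sh).
version_lt() {                  # returns 0 (true) if $1 is strictly lower than $2
    local -a ver1 ver2
    local v max
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov is older than 2.x"   # true here, as in the trace
```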
00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:51.218 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:53.121 10:46:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:53.121 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:53.122 10:46:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:27:53.122 Found 0000:82:00.0 (0x8086 - 0x159b) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:27:53.122 Found 0000:82:00.1 (0x8086 - 0x159b) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.122 
10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:27:53.122 Found net devices under 0000:82:00.0: cvl_0_0 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:27:53.122 Found net devices under 0000:82:00.1: cvl_0_1 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:53.122 10:46:41 
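The block above is nvmf/common.sh discovering the physical NICs for this run: the E810 device IDs (0x1592/0x159b) are matched against the PCI bus cache, and each selected PCI function (0000:82:00.0 and 0000:82:00.1) is mapped to its kernel net device (cvl_0_0, cvl_0_1) through sysfs. A minimal sketch of that sysfs mapping step, assuming only the standard /sys/bus/pci layout (variable names here are illustrative, not the script's own):

# For each selected PCI function, the net/ subdirectory names the bound interface.
pci_devs=(0000:82:00.0 0000:82:00.1)      # the two E810 ports found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $dev ]] || continue
        net_devs+=("${dev##*/}")          # e.g. cvl_0_0, cvl_0_1
    done
done
echo "Found net devices: ${net_devs[*]}"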
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:53.122 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:53.381 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:53.381 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:53.381 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:53.381 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:53.381 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:53.381 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:53.381 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:53.381 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:53.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:27:53.382 00:27:53.382 --- 10.0.0.2 ping statistics --- 00:27:53.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.382 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:53.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:53.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:27:53.382 00:27:53.382 --- 10.0.0.1 ping statistics --- 00:27:53.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.382 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=496760 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 496760 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 496760 ']' 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:53.382 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:53.382 [2024-11-15 10:46:41.772630] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:53.382 [2024-11-15 10:46:41.773697] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:27:53.382 [2024-11-15 10:46:41.773750] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.382 [2024-11-15 10:46:41.847467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:53.642 [2024-11-15 10:46:41.901884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.642 [2024-11-15 10:46:41.901940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.642 [2024-11-15 10:46:41.901966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.642 [2024-11-15 10:46:41.901976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.642 [2024-11-15 10:46:41.901985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.642 [2024-11-15 10:46:41.903299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.642 [2024-11-15 10:46:41.903395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.642 [2024-11-15 10:46:41.903400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.642 [2024-11-15 10:46:41.986087] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:53.642 [2024-11-15 10:46:41.986302] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:53.642 [2024-11-15 10:46:41.986316] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:53.642 [2024-11-15 10:46:41.986576] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
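Taken together, the nvmf_tcp_init trace above builds a small loopback topology out of the two E810 ports: cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace and carries the target, while cvl_0_1 (10.0.0.1) stays in the default namespace for the initiator, with an iptables rule opening TCP port 4420 and a ping in each direction as a sanity check; the target itself is then launched inside the namespace in interrupt mode. A condensed sketch of those steps, reconstructed from the commands in the trace (paths shortened, run from the SPDK repo root; flushes, error handling and cleanup omitted):

# Reconstructed from the nvmf_tcp_init trace above.
TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                  # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> initiator
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE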
00:27:53.642 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:53.642 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:27:53.642 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:53.642 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:53.642 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:53.642 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.642 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:53.642 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:53.900 [2024-11-15 10:46:42.284111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.900 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:54.464 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.464 [2024-11-15 10:46:42.910856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.464 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:55.028 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:55.285 Malloc0 00:27:55.285 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:55.542 Delay0 00:27:55.543 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.799 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:56.055 NULL1 00:27:56.056 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
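Before the stress loop starts, everything on the target is configured over its JSON-RPC socket: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with data and discovery listeners on 10.0.0.2:4420, a 32 MiB malloc bdev wrapped in a delay bdev (Delay0), and a 1000 MiB null bdev (NULL1), both attached as namespaces. A compact restatement of that RPC sequence, reconstructed from the commands traced above (rpc.py paths shortened):

RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 512 -b Malloc0              # 32 MiB backing bdev, 512 B blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$RPC bdev_null_create NULL1 1000 512                   # 1000 MiB, 512 B blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1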
00:27:56.619 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=497066 00:27:56.619 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:56.619 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:27:56.619 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.991 Read completed with error (sct=0, sc=11) 00:27:57.991 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.991 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:57.991 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:58.249 true 00:27:58.249 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:27:58.249 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.182 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.439 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:59.439 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:59.697 true 00:27:59.697 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:27:59.697 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.954 10:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.212 10:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:00.212 10:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:00.470 true 00:28:00.470 10:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:00.470 10:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.728 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.985 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:00.985 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:01.243 true 00:28:01.243 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:01.243 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.176 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.433 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:02.433 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:02.691 true 00:28:02.691 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:02.691 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.949 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:28:03.207 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:03.207 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:03.465 true 00:28:03.465 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:03.465 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.399 10:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.656 10:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:04.656 10:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:04.913 true 00:28:04.913 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:04.913 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.171 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.429 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:05.429 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:05.687 true 00:28:05.687 10:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:05.687 10:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.945 10:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.202 10:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:06.202 10:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:06.460 true 00:28:06.460 10:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:06.460 
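The repeating pattern above is the core of the single-namespace phase: while spdk_nvme_perf (PID 497066, a 30 s randread run with queue depth 128) keeps I/O in flight, the script detaches namespace 1, re-attaches Delay0, and grows NULL1 by one unit per pass (1001, 1002, ...); the bursts of 'Read completed with error (sct=0, sc=11)' are the initiator reporting reads that failed against the detached namespace, which the test tolerates. A hedged sketch of that loop, with its shape inferred from the trace rather than copied from the script (variable names illustrative):

# Loop shape inferred from the ns_hotplug_stress.sh trace; not the script verbatim.
RPC=./scripts/rpc.py
./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do              # run until perf exits
    $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $RPC bdev_null_resize NULL1 "$null_size"           # 1001, 1002, ... as above
done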
10:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.392 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.649 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:07.649 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:07.907 true 00:28:07.907 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:07.907 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.472 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.472 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:08.472 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:08.729 true 00:28:08.729 10:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:08.729 10:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.293 10:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.293 10:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:09.293 10:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:09.550 true 00:28:09.550 10:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:09.550 10:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.922 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.922 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:10.922 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:11.178 true 00:28:11.178 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:11.178 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.435 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.693 10:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:11.693 10:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:11.950 true 00:28:11.950 10:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:11.950 10:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.208 10:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.465 10:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:12.465 10:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:12.723 true 00:28:12.981 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:12.981 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:13.914 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.171 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:14.171 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1016 00:28:14.171 true 00:28:14.428 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:14.429 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.686 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.944 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:14.944 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:15.202 true 00:28:15.202 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:15.202 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.459 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.716 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:15.716 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:15.973 true 00:28:15.973 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:15.973 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.907 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.164 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:17.164 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:17.422 true 00:28:17.422 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:17.422 
10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.679 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.936 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:17.936 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:18.194 true 00:28:18.194 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:18.194 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:19.125 10:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:19.382 10:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:19.382 10:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:19.639 true 00:28:19.639 10:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:19.639 10:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.897 10:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.154 10:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:20.154 10:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:20.411 true 00:28:20.411 10:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:20.411 10:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:21.342 10:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:21.600 10:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:21.600 10:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:21.858 true 00:28:21.858 10:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:21.858 10:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.114 10:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.371 10:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:22.371 10:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:22.628 true 00:28:22.628 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:22.628 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.885 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:23.143 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:23.143 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:23.400 true 00:28:23.400 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:23.400 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.333 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.591 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:24.591 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1026 00:28:24.847 true 00:28:24.847 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:24.847 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.104 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.361 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:25.361 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:25.618 true 00:28:25.876 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:25.876 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:26.133 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.396 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:26.396 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:26.658 true 00:28:26.658 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:26.658 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.590 Initializing NVMe Controllers 00:28:27.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.590 Controller IO queue size 128, less than required. 00:28:27.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.590 Controller IO queue size 128, less than required. 00:28:27.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:27.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:27.590 Initialization complete. Launching workers. 
00:28:27.590 ======================================================== 00:28:27.590 Latency(us) 00:28:27.590 Device Information : IOPS MiB/s Average min max 00:28:27.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 755.47 0.37 75930.49 2797.28 1013534.11 00:28:27.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9175.03 4.48 13950.55 2371.02 537234.81 00:28:27.591 ======================================================== 00:28:27.591 Total : 9930.50 4.85 18665.70 2371.02 1013534.11 00:28:27.591 00:28:27.591 10:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:27.848 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:28:27.848 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:28:28.105 true 00:28:28.105 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 497066 00:28:28.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (497066) - No such process 00:28:28.105 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 497066 00:28:28.105 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.361 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:28.618 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:28.618 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:28.618 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:28.618 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:28.618 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:28.875 null0 00:28:28.875 10:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:28.875 10:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:28.876 10:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:29.133 null1 00:28:29.133 10:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.133 10:47:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.133 10:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:29.390 null2 00:28:29.390 10:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.390 10:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.390 10:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:29.648 null3 00:28:29.648 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.648 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.648 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:29.905 null4 00:28:29.905 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.905 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.905 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:30.162 null5 00:28:30.162 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:30.162 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:30.162 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:30.419 null6 00:28:30.419 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:30.419 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:30.419 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:30.677 null7 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.677 10:47:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
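The xtrace entries above come from the add_remove helper in target/ns_hotplug_stress.sh: each worker pins one namespace ID to one null bdev and hot-plugs it ten times against nqn.2016-06.io.spdk:cnode1. A minimal bash sketch of that loop, reconstructed only from the logged commands (the RPC names and argument order are taken verbatim from the trace at @14-@18; everything else, including the shortened rpc.py path, is an assumption):

    # Hedged sketch of the add/remove worker traced at ns_hotplug_stress.sh@14-@18.
    # $1 = namespace ID, $2 = backing null bdev; rpc.py stands in for the full
    # /var/jenkins/workspace/.../spdk/scripts/rpc.py path seen in the log.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # Attach the bdev as namespace $nsid of cnode1, then immediately detach it.
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }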
00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 501194 501195 501197 501199 501201 501203 501205 501207 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.677 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:31.242 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.242 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:31.242 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:31.242 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:31.242 10:47:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:31.242 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:31.242 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:31.242 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:31.242 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.242 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.242 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.500 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:31.758 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:31.758 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.758 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:31.758 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:31.758 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:31.758 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:31.758 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:31.758 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
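Taken together with the bdev_null_create calls earlier in the trace (null0 through null7, where 100 and 4096 appear to be the bdev size in MB and the block size), the @59-@66 entries show the driver loop: one background add_remove worker per null bdev, with the parent collecting PIDs and waiting on all of them (the "wait 501194 501195 ..." entry above). A rough sketch under those assumptions; nthreads and pids are names that appear in the trace, the loop shapes and nthreads=8 are inferred from the eight bdevs and eight waited PIDs:

    # Hedged reconstruction of the launch loop around ns_hotplug_stress.sh@59-@66.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096    # backing bdev for worker $i
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &           # namespace IDs are 1-based in the trace
        pids+=($!)
    done
    wait "${pids[@]}"                                # block until every hot-plug worker finishes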
00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.016 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:32.274 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:32.274 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:32.274 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:32.274 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.274 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:32.274 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:32.274 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:32.274 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.532 10:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:32.790 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:32.790 10:47:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:32.790 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:32.790 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.790 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:32.790 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:32.790 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:32.790 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:33.047 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.047 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.047 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:33.047 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.047 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.047 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.304 10:47:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.304 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.305 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:33.562 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:33.562 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:33.562 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.562 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:33.562 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:33.562 
10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:33.562 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:33.562 10:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.821 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:34.079 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:34.079 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:34.079 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.079 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:34.079 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:34.079 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:34.079 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:34.080 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.349 
10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.349 10:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:34.606 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:34.606 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.606 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:34.606 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:34.606 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:34.606 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:34.606 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:34.606 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.864 10:47:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.864 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:35.123 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.123 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.123 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:35.381 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.381 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:35.381 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:35.381 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:35.381 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:35.381 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:35.381 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:35.381 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:35.638 10:47:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.638 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.639 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:35.639 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.639 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.639 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:35.639 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.639 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.639 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:35.896 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:35.896 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:35.896 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.896 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:35.896 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:35.896 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:35.896 
10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:35.896 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:36.154 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.154 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.154 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:36.154 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.154 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.154 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:36.154 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.154 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.154 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:36.154 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.154 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.154 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:36.155 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.155 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.155 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:36.155 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.155 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.155 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:36.155 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.155 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.155 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:36.155 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.155 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.155 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:36.412 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:36.412 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:36.412 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.412 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:36.412 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:36.412 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:36.412 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:36.412 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:36.670 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:36.670 rmmod nvme_tcp 00:28:36.928 rmmod nvme_fabrics 00:28:36.928 rmmod nvme_keyring 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 496760 ']' 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 496760 00:28:36.928 10:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 496760 ']' 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 496760 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 496760 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 496760' 00:28:36.928 killing process with pid 496760 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 496760 00:28:36.928 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 496760 00:28:37.200 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:37.200 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:37.200 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:37.200 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:37.200 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:37.200 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:37.200 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:37.200 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:37.200 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:37.200 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.200 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.200 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.149 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:39.149 00:28:39.149 real 0m48.154s 00:28:39.149 user 3m21.605s 00:28:39.149 sys 0m22.170s 00:28:39.149 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:39.149 10:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:39.149 ************************************ 00:28:39.149 END TEST nvmf_ns_hotplug_stress 00:28:39.149 ************************************ 00:28:39.149 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:39.149 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:39.149 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:39.149 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:39.149 ************************************ 00:28:39.149 START TEST nvmf_delete_subsystem 00:28:39.149 ************************************ 00:28:39.149 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:39.149 * Looking for test storage... 00:28:39.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:39.149 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:39.149 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:39.149 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:28:39.407 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:39.407 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.407 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:39.408 10:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:39.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.408 --rc genhtml_branch_coverage=1 00:28:39.408 --rc genhtml_function_coverage=1 00:28:39.408 --rc genhtml_legend=1 00:28:39.408 --rc geninfo_all_blocks=1 00:28:39.408 --rc geninfo_unexecuted_blocks=1 00:28:39.408 00:28:39.408 ' 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:39.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.408 --rc genhtml_branch_coverage=1 00:28:39.408 --rc genhtml_function_coverage=1 00:28:39.408 --rc genhtml_legend=1 00:28:39.408 --rc geninfo_all_blocks=1 00:28:39.408 --rc geninfo_unexecuted_blocks=1 00:28:39.408 00:28:39.408 ' 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:39.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.408 --rc genhtml_branch_coverage=1 00:28:39.408 --rc genhtml_function_coverage=1 00:28:39.408 --rc genhtml_legend=1 00:28:39.408 --rc geninfo_all_blocks=1 00:28:39.408 --rc 
geninfo_unexecuted_blocks=1 00:28:39.408 00:28:39.408 ' 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:39.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.408 --rc genhtml_branch_coverage=1 00:28:39.408 --rc genhtml_function_coverage=1 00:28:39.408 --rc genhtml_legend=1 00:28:39.408 --rc geninfo_all_blocks=1 00:28:39.408 --rc geninfo_unexecuted_blocks=1 00:28:39.408 00:28:39.408 ' 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.408 10:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.408 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:39.409 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:41.311 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.311 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:41.311 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:41.311 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:41.311 10:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:41.311 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:41.311 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:41.312 10:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:28:41.312 Found 0000:82:00.0 (0x8086 - 0x159b) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:28:41.312 Found 0000:82:00.1 (0x8086 - 0x159b) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.312 10:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:28:41.312 Found net devices under 0000:82:00.0: cvl_0_0 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:28:41.312 Found net devices under 0000:82:00.1: cvl_0_1 00:28:41.312 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.313 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:41.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:28:41.571 00:28:41.571 --- 10.0.0.2 ping statistics --- 00:28:41.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.571 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:41.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:28:41.571 00:28:41.571 --- 10.0.0.1 ping statistics --- 00:28:41.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.571 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=503966 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 503966 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 503966 ']' 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
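The nvmf_tcp_init steps traced above build the two-port TCP test topology: the first E810 port (cvl_0_0) is moved into a dedicated network namespace and becomes the target side, while the second port (cvl_0_1) stays in the root namespace as the initiator side, and the two ping checks confirm the path works in both directions. A condensed sketch of the equivalent commands, reconstructed from this trace (the cvl_0_0/cvl_0_1 names and the namespace name are specific to this host):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                             # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1         # target namespace -> initiator
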
00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:41.571 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:41.571 [2024-11-15 10:47:29.953003] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:41.571 [2024-11-15 10:47:29.954134] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:28:41.571 [2024-11-15 10:47:29.954201] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.571 [2024-11-15 10:47:30.027996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:41.857 [2024-11-15 10:47:30.091233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.857 [2024-11-15 10:47:30.091293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.857 [2024-11-15 10:47:30.091308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.857 [2024-11-15 10:47:30.091319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.857 [2024-11-15 10:47:30.091330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.857 [2024-11-15 10:47:30.092827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.857 [2024-11-15 10:47:30.092832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.858 [2024-11-15 10:47:30.184949] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:41.858 [2024-11-15 10:47:30.184998] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:41.858 [2024-11-15 10:47:30.185274] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
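The target itself is started inside that namespace by nvmfappstart with --interrupt-mode and a two-core mask (-m 0x3); the NOTICE lines above confirm both reactors and the app/poll-group threads come up in interrupt mode. waitforlisten then simply blocks until the RPC socket answers. A minimal illustrative stand-in for that wait (not the actual helper from autotest_common.sh) could look like:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # poll the default RPC socket until the target responds (illustrative sketch only)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
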
00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:41.858 [2024-11-15 10:47:30.241499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:41.858 [2024-11-15 10:47:30.261769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:41.858 NULL1 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.858 10:47:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:41.858 Delay0 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=504103 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:41.858 10:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:42.116 [2024-11-15 10:47:30.342473] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
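The delete_subsystem test assembles its target state through the RPCs traced above: a TCP transport, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev (latencies given in microseconds, so roughly one second) so that I/O is guaranteed to still be in flight when the subsystem is removed. Roughly, the sequence is (rpc_cmd feeds these to scripts/rpc.py against the target's socket):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512          # 1000 MiB backing bdev, 512-byte blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &   # background I/O load
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the subsystem mid-run

The "completed with error (sct=0, sc=8)" lines that follow are the expected outcome: once the subsystem is deleted underneath the running perf job, its outstanding commands are aborted (generic status 0x08, aborted due to SQ deletion) and new submissions begin to fail, which is what the "starting I/O failed: -6" lines show.
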
00:28:44.026 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:44.026 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.026 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 starting I/O failed: -6 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 starting I/O failed: -6 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 starting I/O failed: -6 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 starting I/O failed: -6 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 starting I/O failed: -6 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 starting I/O failed: -6 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 starting I/O failed: -6 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 starting I/O failed: -6 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 starting I/O failed: -6 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 starting I/O failed: -6 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 starting I/O failed: -6 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 [2024-11-15 10:47:32.631276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf422c0 is same with the state(6) to be set 00:28:44.285 Read completed 
with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Read completed with error (sct=0, sc=8) 00:28:44.285 Write completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error 
(sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5084000c40 is same with the state(6) to be set 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with tstarting I/O failed: -6 00:28:44.286 he state(6) to be set 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 [2024-11-15 10:47:32.632682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with the state(6) to be set 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with the state(6) to be set 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with the state(6) to be set 00:28:44.286 Read completed with error (sct=0, 
sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 [2024-11-15 10:47:32.632721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with tRead completed with error (sct=0, sc=8) 00:28:44.286 he state(6) to be set 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with tstarting I/O failed: -6 00:28:44.286 he state(6) to be set 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with tRead completed with error (sct=0, sc=8) 00:28:44.286 he state(6) to be set 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with the state(6) to be set 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 [2024-11-15 10:47:32.632778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with tRead completed with error (sct=0, sc=8) 00:28:44.286 he state(6) to be set 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with the state(6) to be set 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with the state(6) to be set 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 [2024-11-15 10:47:32.632814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with tWrite completed with error (sct=0, sc=8) 00:28:44.286 he state(6) to be set 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with tstarting I/O failed: -6 00:28:44.286 he state(6) to be set 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with the state(6) to be set 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 [2024-11-15 10:47:32.632851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with the state(6) to be set 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with tRead completed with error (sct=0, sc=8) 00:28:44.286 he state(6) to be set 00:28:44.286 starting I/O failed: -6 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with the state(6) to be set 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 [2024-11-15 10:47:32.632886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x832000 is same with the state(6) to be set 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with tWrite completed with error (sct=0, sc=8) 00:28:44.286 he state(6) to be set 00:28:44.286 starting I/O failed: -6 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with the state(6) to be set 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 [2024-11-15 10:47:32.632922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with the state(6) to be set 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.632934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with the state(6) to be set 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 [2024-11-15 10:47:32.632945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x832000 is same with tRead completed with error (sct=0, sc=8) 00:28:44.286 he state(6) to be set 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Write completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 starting I/O failed: -6 00:28:44.286 Read completed with error (sct=0, sc=8) 00:28:44.286 [2024-11-15 10:47:32.633031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f508400d350 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same 
with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 [2024-11-15 10:47:32.633341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6be0 is same with the state(6) to be set 00:28:44.287 Write completed with error (sct=0, sc=8) 00:28:44.287 Write completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Write completed with error (sct=0, sc=8) 00:28:44.287 Write completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Write completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Write completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Write completed with error (sct=0, sc=8) 00:28:44.287 [2024-11-15 10:47:32.633855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f508400d020 is same with the state(6) to be set 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Write completed with error (sct=0, sc=8) 00:28:44.287 Write completed with error (sct=0, sc=8) 00:28:44.287 
Write completed with error (sct=0, sc=8) 00:28:44.287 Write completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Write completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Write completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Write completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 Read completed with error (sct=0, sc=8) 00:28:44.287 [2024-11-15 10:47:32.634032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f508400d680 is same with the state(6) to be set 00:28:45.219 [2024-11-15 10:47:33.604841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf439a0 is same with the state(6) to be set 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 [2024-11-15 10:47:33.635222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf424a0 is same with the state(6) to be set 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Write 
completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 Write completed with error (sct=0, sc=8) 00:28:45.219 Read completed with error (sct=0, sc=8) 00:28:45.219 [2024-11-15 10:47:33.636179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf42860 is same with the state(6) to be set 00:28:45.219 Initializing NVMe Controllers 00:28:45.219 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.220 Controller IO queue size 128, less than required. 00:28:45.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:45.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:45.220 Initialization complete. Launching workers. 00:28:45.220 ======================================================== 00:28:45.220 Latency(us) 00:28:45.220 Device Information : IOPS MiB/s Average min max 00:28:45.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.08 0.08 888461.70 501.40 1013585.95 00:28:45.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.67 0.08 792135.73 472.12 1011801.50 00:28:45.220 ======================================================== 00:28:45.220 Total : 334.75 0.16 841939.83 472.12 1013585.95 00:28:45.220 00:28:45.220 [2024-11-15 10:47:33.637109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf439a0 (9): Bad file descriptor 00:28:45.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:45.220 10:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.220 10:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:45.220 10:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 504103 00:28:45.220 10:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 504103 00:28:45.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (504103) - No such process 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 504103 
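The repeated "Read/Write completed with error (sct=0, sc=8)" completions above are the expected outcome of this test case: sct=0 is the NVMe generic command status type, and status code 0x08 in that set is "Command Aborted due to SQ Deletion", which is what deleting nqn.2016-06.io.spdk:cnode1 underneath a still-running spdk_nvme_perf job provokes. After issuing the delete, delete_subsystem.sh simply polls until the perf process disappears: it re-checks kill -0 on the perf PID every 0.5 s and gives up after a bounded number of iterations (the trace shows bounds of 30 and 20 for the two loops), then asserts via the "NOT wait 504103" step that waiting on the dead PID reports failure. Below is a minimal sketch of that polling pattern, assuming a hypothetical wait_for_perf_exit helper and the 20-iteration bound from the second loop; it is an illustrative reconstruction of the pattern visible in the xtrace output, not the verbatim script source.

    # Illustrative reconstruction of the poll loop traced above; the function and
    # variable names are placeholders, only the kill -0 / sleep 0.5 / bounded
    # delay++ pattern is taken from the xtrace output.
    wait_for_perf_exit() {
            local perf_pid=$1 delay=0
            # kill -0 sends no signal; it only tests whether the PID still exists
            while kill -0 "$perf_pid" 2>/dev/null; do
                    if (( delay++ > 20 )); then
                            echo "perf pid $perf_pid still alive after ~10s" >&2
                            return 1
                    fi
                    sleep 0.5
            done
            # Once kill -0 starts failing, the test additionally checks that
            # waiting on the PID fails, as in the "NOT wait 504103" step above.
            return 0
    }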
00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 504103 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 504103 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.785 [2024-11-15 10:47:34.161713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.785 10:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=504513 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 504513 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:45.785 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:45.785 [2024-11-15 10:47:34.226907] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:46.350 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:46.350 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 504513 00:28:46.350 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:46.914 10:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:46.914 10:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 504513 00:28:46.914 10:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:47.478 10:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:47.478 10:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 504513 00:28:47.478 10:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:47.735 10:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:47.735 10:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 504513 00:28:47.735 10:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:48.300 10:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:48.300 10:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 504513 00:28:48.300 10:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:48.864 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:48.864 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 504513 00:28:48.864 10:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:49.122 Initializing NVMe Controllers 00:28:49.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:49.122 Controller IO queue size 128, less than required. 00:28:49.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:49.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:49.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:49.122 Initialization complete. Launching workers. 00:28:49.122 ======================================================== 00:28:49.122 Latency(us) 00:28:49.122 Device Information : IOPS MiB/s Average min max 00:28:49.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005019.15 1000272.48 1043027.32 00:28:49.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006881.58 1000263.11 1045080.29 00:28:49.122 ======================================================== 00:28:49.122 Total : 256.00 0.12 1005950.37 1000263.11 1045080.29 00:28:49.122 00:28:49.379 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:49.379 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 504513 00:28:49.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (504513) - No such process 00:28:49.379 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 504513 00:28:49.379 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.380 rmmod nvme_tcp 00:28:49.380 rmmod nvme_fabrics 00:28:49.380 rmmod nvme_keyring 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 503966 ']' 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@518 -- # killprocess 503966 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 503966 ']' 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 503966 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 503966 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 503966' 00:28:49.380 killing process with pid 503966 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 503966 00:28:49.380 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 503966 00:28:49.640 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:49.640 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:49.640 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:49.640 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:49.640 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:49.640 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:49.640 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:49.640 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.640 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.640 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.640 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.640 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.174 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:52.174 00:28:52.174 real 0m12.507s 00:28:52.174 user 0m24.035s 00:28:52.174 sys 0m3.831s 00:28:52.174 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:52.174 10:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:52.174 ************************************ 00:28:52.174 END TEST nvmf_delete_subsystem 00:28:52.174 ************************************ 00:28:52.174 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:52.174 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:52.174 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:52.174 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:52.174 ************************************ 00:28:52.174 START TEST nvmf_host_management 00:28:52.174 ************************************ 00:28:52.174 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:52.174 * Looking for test storage... 00:28:52.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:52.174 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:52.174 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:28:52.174 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:52.174 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:52.174 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:52.174 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@345 -- # : 1 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:52.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.175 --rc genhtml_branch_coverage=1 00:28:52.175 --rc genhtml_function_coverage=1 00:28:52.175 --rc genhtml_legend=1 00:28:52.175 --rc geninfo_all_blocks=1 00:28:52.175 --rc geninfo_unexecuted_blocks=1 00:28:52.175 00:28:52.175 ' 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:52.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.175 --rc genhtml_branch_coverage=1 00:28:52.175 --rc genhtml_function_coverage=1 00:28:52.175 --rc genhtml_legend=1 00:28:52.175 --rc geninfo_all_blocks=1 00:28:52.175 --rc geninfo_unexecuted_blocks=1 00:28:52.175 00:28:52.175 ' 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:52.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.175 --rc genhtml_branch_coverage=1 00:28:52.175 --rc genhtml_function_coverage=1 00:28:52.175 --rc genhtml_legend=1 00:28:52.175 --rc geninfo_all_blocks=1 00:28:52.175 --rc geninfo_unexecuted_blocks=1 00:28:52.175 00:28:52.175 ' 00:28:52.175 10:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:52.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.175 --rc genhtml_branch_coverage=1 00:28:52.175 --rc genhtml_function_coverage=1 00:28:52.175 --rc genhtml_legend=1 00:28:52.175 --rc geninfo_all_blocks=1 00:28:52.175 --rc geninfo_unexecuted_blocks=1 00:28:52.175 00:28:52.175 ' 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:52.175 
10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:52.175 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.176 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:54.074 10:47:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:54.074 10:47:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:28:54.074 Found 0000:82:00.0 (0x8086 - 0x159b) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:28:54.074 Found 0000:82:00.1 (0x8086 - 0x159b) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:54.074 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:28:54.075 Found net devices under 0000:82:00.0: cvl_0_0 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:28:54.075 Found net devices under 0000:82:00.1: cvl_0_1 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:54.075 10:47:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:54.075 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:54.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:54.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:28:54.333 00:28:54.333 --- 10.0.0.2 ping statistics --- 00:28:54.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.333 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:54.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:28:54.333 00:28:54.333 --- 10.0.0.1 ping statistics --- 00:28:54.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.333 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=506861 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 506861 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 506861 ']' 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:54.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:54.333 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.333 [2024-11-15 10:47:42.631582] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:54.333 [2024-11-15 10:47:42.632722] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:28:54.333 [2024-11-15 10:47:42.632780] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.333 [2024-11-15 10:47:42.707427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:54.333 [2024-11-15 10:47:42.767247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.333 [2024-11-15 10:47:42.767303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.333 [2024-11-15 10:47:42.767331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.333 [2024-11-15 10:47:42.767342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.333 [2024-11-15 10:47:42.767358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:54.333 [2024-11-15 10:47:42.768992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.333 [2024-11-15 10:47:42.769056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.333 [2024-11-15 10:47:42.769124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:54.333 [2024-11-15 10:47:42.769127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.591 [2024-11-15 10:47:42.855415] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:54.591 [2024-11-15 10:47:42.855682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:54.591 [2024-11-15 10:47:42.855934] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:54.591 [2024-11-15 10:47:42.856516] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:54.591 [2024-11-15 10:47:42.856783] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
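The block above is nvmftestinit/nvmf_tcp_init wiring the two detected E810 ports into a namespace-based TCP test topology and then starting nvmf_tgt inside that namespace in interrupt mode. A condensed sketch of the same steps, with the interface names, addresses and target flags taken from the trace (binary path shortened to be relative to the SPDK tree), looks like this:

```bash
# Condensed replay of the nvmf_tcp_init steps traced above: the target-side
# E810 port (cvl_0_0) is moved into its own namespace and addressed as
# 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace as
# 10.0.0.1, TCP port 4420 is opened, and reachability is checked both ways.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                      # host -> namespaced target port
ip netns exec "$NS" ping -c 1 10.0.0.1  # target namespace -> host port

# The target is then launched inside the namespace in interrupt mode,
# exactly as traced above for nvmfappstart -m 0x1E:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
```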
00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.591 [2024-11-15 10:47:42.901794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.591 Malloc0 00:28:54.591 [2024-11-15 10:47:42.986070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:54.591 10:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=507019 00:28:54.591 10:47:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 507019 /var/tmp/bdevperf.sock 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 507019 ']' 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:54.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.591 { 00:28:54.591 "params": { 00:28:54.591 "name": "Nvme$subsystem", 00:28:54.591 "trtype": "$TEST_TRANSPORT", 00:28:54.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.591 "adrfam": "ipv4", 00:28:54.591 "trsvcid": "$NVMF_PORT", 00:28:54.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.591 "hdgst": ${hdgst:-false}, 00:28:54.591 "ddgst": ${ddgst:-false} 00:28:54.591 }, 00:28:54.591 "method": "bdev_nvme_attach_controller" 00:28:54.591 } 00:28:54.591 EOF 00:28:54.591 )") 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
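The rpcs.txt batch applied by host_management.sh@23-@30 is not echoed into the log; only its effects are visible (a Malloc0 bdev built from MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 and a TCP listener on 10.0.0.2:4420). A plausible reconstruction of that batch, using the subsystem and host NQNs that appear in the bdevperf config printed just below, would be the following; the exact flags in the real script may differ:

```bash
# Hypothetical reconstruction of the rpcs.txt batch; the trace only shows its
# side effects (Malloc0 created, listener on 10.0.0.2:4420), so argument
# details are assumptions. rpc_cmd in the harness wraps scripts/rpc.py.
RPC=scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode0
HOST=nqn.2016-06.io.spdk:host0

$RPC bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$RPC nvmf_create_subsystem "$NQN"
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_host "$NQN" "$HOST"      # later removed/re-added by the test
```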
00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:54.591 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:54.591 "params": { 00:28:54.591 "name": "Nvme0", 00:28:54.591 "trtype": "tcp", 00:28:54.591 "traddr": "10.0.0.2", 00:28:54.591 "adrfam": "ipv4", 00:28:54.591 "trsvcid": "4420", 00:28:54.591 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:54.591 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:54.591 "hdgst": false, 00:28:54.591 "ddgst": false 00:28:54.591 }, 00:28:54.591 "method": "bdev_nvme_attach_controller" 00:28:54.591 }' 00:28:54.849 [2024-11-15 10:47:43.063005] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:28:54.849 [2024-11-15 10:47:43.063080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507019 ] 00:28:54.849 [2024-11-15 10:47:43.135071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.849 [2024-11-15 10:47:43.194013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.107 Running I/O for 10 seconds... 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r 
'.bdevs[0].num_read_ops' 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:28:55.107 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:55.365 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:55.365 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:55.365 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:55.365 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.365 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:55.365 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=556 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 556 -ge 100 ']' 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:55.624 [2024-11-15 10:47:43.860276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:55.624 [2024-11-15 10:47:43.860355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.860386] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:55.624 [2024-11-15 10:47:43.860402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.860424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:55.624 [2024-11-15 10:47:43.860438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.860452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:55.624 [2024-11-15 10:47:43.860465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.860478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eda40 is same with the state(6) to be set 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.624 10:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:55.624 [2024-11-15 10:47:43.871058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23eda40 (9): Bad file descriptor 00:28:55.624 [2024-11-15 10:47:43.871150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.624 [2024-11-15 10:47:43.871905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.624 [2024-11-15 10:47:43.871919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.871934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.871948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.871963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.871977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.871992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:55.625 [2024-11-15 10:47:43.872863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.872985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.872998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.873014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.873028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.873043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.873057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.873072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.873088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.625 [2024-11-15 10:47:43.873106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.625 [2024-11-15 10:47:43.873120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.626 [2024-11-15 10:47:43.873137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.626 [2024-11-15 10:47:43.873151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.626 
[2024-11-15 10:47:43.874353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:55.626 task offset: 81920 on job bdev=Nvme0n1 fails 00:28:55.626 00:28:55.626 Latency(us) 00:28:55.626 [2024-11-15T09:47:44.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.626 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.626 Job: Nvme0n1 ended in about 0.42 seconds with error 00:28:55.626 Verification LBA range: start 0x0 length 0x400 00:28:55.626 Nvme0n1 : 0.42 1530.73 95.67 153.07 0.00 36968.46 2524.35 34564.17 00:28:55.626 [2024-11-15T09:47:44.089Z] =================================================================================================================== 00:28:55.626 [2024-11-15T09:47:44.089Z] Total : 1530.73 95.67 153.07 0.00 36968.46 2524.35 34564.17 00:28:55.626 [2024-11-15 10:47:43.877178] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:55.626 [2024-11-15 10:47:43.970565] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:28:56.558 10:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 507019 00:28:56.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (507019) - No such process 00:28:56.558 10:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:56.558 10:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:56.558 10:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:56.558 10:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:56.559 10:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:56.559 10:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:56.559 10:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:56.559 10:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:56.559 { 00:28:56.559 "params": { 00:28:56.559 "name": "Nvme$subsystem", 00:28:56.559 "trtype": "$TEST_TRANSPORT", 00:28:56.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.559 "adrfam": "ipv4", 00:28:56.559 "trsvcid": "$NVMF_PORT", 00:28:56.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.559 "hdgst": ${hdgst:-false}, 00:28:56.559 "ddgst": ${ddgst:-false} 00:28:56.559 }, 00:28:56.559 "method": "bdev_nvme_attach_controller" 00:28:56.559 } 00:28:56.559 EOF 00:28:56.559 )") 00:28:56.559 10:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:56.559 10:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
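The first bdevperf run above exercises the core of nvmf_host_management: verify I/O is started against Nvme0n1, waitforio polls bdev_get_iostat until at least 100 reads have completed, and the host NQN is then yanked out of the subsystem, which makes every in-flight command complete as ABORTED - SQ DELETION (the flood of qpair prints) and the job fail before the host is re-added. A condensed reconstruction of that flow, paraphrased from the traced host_management.sh lines (@54-@62 and @84-@91) rather than copied from the script, is:

```bash
# Sketch of the traced host_management.sh flow; rpc_cmd is the harness
# wrapper around scripts/rpc.py, perfpid is the bdevperf PID from the trace.
waitforio() {
    local rpc_sock=$1 bdev=$2 i ret=1 count
    for ((i = 10; i != 0; i--)); do
        count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then   # enough verify I/O has completed
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme0n1

# Remove the host while I/O is in flight: queued commands complete with
# ABORTED - SQ DELETION, the controller resets, and bdevperf's job fails.
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
kill -9 "$perfpid" || true   # already gone here, hence "No such process"
```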
00:28:56.559 10:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:56.559 10:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:56.559 "params": { 00:28:56.559 "name": "Nvme0", 00:28:56.559 "trtype": "tcp", 00:28:56.559 "traddr": "10.0.0.2", 00:28:56.559 "adrfam": "ipv4", 00:28:56.559 "trsvcid": "4420", 00:28:56.559 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.559 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:56.559 "hdgst": false, 00:28:56.559 "ddgst": false 00:28:56.559 }, 00:28:56.559 "method": "bdev_nvme_attach_controller" 00:28:56.559 }' 00:28:56.559 [2024-11-15 10:47:44.922589] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:28:56.559 [2024-11-15 10:47:44.922698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507181 ] 00:28:56.559 [2024-11-15 10:47:44.995534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.816 [2024-11-15 10:47:45.054014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.073 Running I/O for 1 seconds... 00:28:58.012 1536.00 IOPS, 96.00 MiB/s 00:28:58.012 Latency(us) 00:28:58.012 [2024-11-15T09:47:46.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.012 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:58.012 Verification LBA range: start 0x0 length 0x400 00:28:58.012 Nvme0n1 : 1.02 1568.84 98.05 0.00 0.00 40149.10 6844.87 34175.81 00:28:58.012 [2024-11-15T09:47:46.475Z] =================================================================================================================== 00:28:58.012 [2024-11-15T09:47:46.475Z] Total : 1568.84 98.05 0.00 0.00 40149.10 6844.87 34175.81 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 
-- # modprobe -v -r nvme-tcp 00:28:58.269 rmmod nvme_tcp 00:28:58.269 rmmod nvme_fabrics 00:28:58.269 rmmod nvme_keyring 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 506861 ']' 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 506861 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 506861 ']' 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 506861 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 506861 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 506861' 00:28:58.269 killing process with pid 506861 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 506861 00:28:58.269 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 506861 00:28:58.529 [2024-11-15 10:47:46.876620] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:58.529 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:58.529 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:58.529 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:58.529 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:58.529 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:58.529 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:58.529 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:58.529 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.529 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:58.529 10:47:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.529 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.529 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.066 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:01.066 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:01.066 00:29:01.066 real 0m8.833s 00:29:01.066 user 0m17.686s 00:29:01.066 sys 0m3.835s 00:29:01.066 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:01.066 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:01.066 ************************************ 00:29:01.066 END TEST nvmf_host_management 00:29:01.066 ************************************ 00:29:01.066 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:01.066 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:01.066 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:01.066 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:01.066 ************************************ 00:29:01.066 START TEST nvmf_lvol 00:29:01.066 ************************************ 00:29:01.066 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:01.066 * Looking for test storage... 
00:29:01.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:01.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.066 --rc genhtml_branch_coverage=1 00:29:01.066 --rc genhtml_function_coverage=1 00:29:01.066 --rc genhtml_legend=1 00:29:01.066 --rc geninfo_all_blocks=1 00:29:01.066 --rc geninfo_unexecuted_blocks=1 00:29:01.066 00:29:01.066 ' 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:01.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.066 --rc genhtml_branch_coverage=1 00:29:01.066 --rc genhtml_function_coverage=1 00:29:01.066 --rc genhtml_legend=1 00:29:01.066 --rc geninfo_all_blocks=1 00:29:01.066 --rc geninfo_unexecuted_blocks=1 00:29:01.066 00:29:01.066 ' 00:29:01.066 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:01.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.066 --rc genhtml_branch_coverage=1 00:29:01.066 --rc genhtml_function_coverage=1 00:29:01.066 --rc genhtml_legend=1 00:29:01.066 --rc geninfo_all_blocks=1 00:29:01.067 --rc geninfo_unexecuted_blocks=1 00:29:01.067 00:29:01.067 ' 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:01.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.067 --rc genhtml_branch_coverage=1 00:29:01.067 --rc genhtml_function_coverage=1 00:29:01.067 --rc genhtml_legend=1 00:29:01.067 --rc geninfo_all_blocks=1 00:29:01.067 --rc geninfo_unexecuted_blocks=1 00:29:01.067 00:29:01.067 ' 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.067 10:47:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:01.067 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:01.068 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:01.068 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.068 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.068 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.068 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:01.068 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:01.068 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:01.068 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:02.974 10:47:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:29:02.974 Found 0000:82:00.0 (0x8086 - 0x159b) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:29:02.974 Found 0000:82:00.1 (0x8086 - 0x159b) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:29:02.974 Found net devices under 0000:82:00.0: cvl_0_0 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:29:02.974 Found net devices under 0000:82:00.1: cvl_0_1 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:02.974 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:02.975 
10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:02.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:02.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:29:02.975 00:29:02.975 --- 10.0.0.2 ping statistics --- 00:29:02.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.975 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:02.975 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:03.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:03.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:29:03.233 00:29:03.233 --- 10.0.0.1 ping statistics --- 00:29:03.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.233 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=509376 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 509376 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 509376 ']' 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:03.233 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:03.233 [2024-11-15 10:47:51.518190] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:29:03.233 [2024-11-15 10:47:51.519221] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:29:03.233 [2024-11-15 10:47:51.519283] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.233 [2024-11-15 10:47:51.589234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:03.233 [2024-11-15 10:47:51.644025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.233 [2024-11-15 10:47:51.644081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:03.233 [2024-11-15 10:47:51.644108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.233 [2024-11-15 10:47:51.644119] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.234 [2024-11-15 10:47:51.644128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:03.234 [2024-11-15 10:47:51.645456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.234 [2024-11-15 10:47:51.645518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:03.234 [2024-11-15 10:47:51.645522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.491 [2024-11-15 10:47:51.731299] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:03.491 [2024-11-15 10:47:51.731559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:03.491 [2024-11-15 10:47:51.731560] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:03.491 [2024-11-15 10:47:51.731848] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:03.491 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:03.491 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:29:03.491 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:03.491 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:03.491 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:03.491 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:03.491 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:03.749 [2024-11-15 10:47:52.038209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.749 10:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:04.006 10:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:04.006 10:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:04.264 10:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:04.264 10:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:04.522 10:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:04.780 10:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=601fac0c-af77-4f72-bfa0-655933e8429c 00:29:04.780 10:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 601fac0c-af77-4f72-bfa0-655933e8429c lvol 20 00:29:05.038 10:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ae29a4d0-a8a9-4190-be2f-38d338e106ca 00:29:05.038 10:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:05.604 10:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ae29a4d0-a8a9-4190-be2f-38d338e106ca 00:29:05.604 10:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:05.861 [2024-11-15 10:47:54.294461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:29:05.861 10:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:06.427 10:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=509800 00:29:06.427 10:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:06.427 10:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:07.359 10:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ae29a4d0-a8a9-4190-be2f-38d338e106ca MY_SNAPSHOT 00:29:07.617 10:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1cd13ad4-5cd2-4743-92a0-94fcd5919098 00:29:07.617 10:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ae29a4d0-a8a9-4190-be2f-38d338e106ca 30 00:29:07.875 10:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1cd13ad4-5cd2-4743-92a0-94fcd5919098 MY_CLONE 00:29:08.439 10:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d27e76d7-c8e4-4b78-99ba-a3702ca23e8d 00:29:08.439 10:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d27e76d7-c8e4-4b78-99ba-a3702ca23e8d 00:29:09.004 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 509800 00:29:17.106 Initializing NVMe Controllers 00:29:17.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:17.106 Controller IO queue size 128, less than required. 00:29:17.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:17.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:17.106 Initialization complete. Launching workers. 
00:29:17.106 ======================================================== 00:29:17.106 Latency(us) 00:29:17.106 Device Information : IOPS MiB/s Average min max 00:29:17.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10541.20 41.18 12151.73 5089.81 73691.58 00:29:17.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10427.90 40.73 12278.73 5720.61 74110.91 00:29:17.106 ======================================================== 00:29:17.106 Total : 20969.10 81.91 12214.89 5089.81 74110.91 00:29:17.106 00:29:17.106 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:17.106 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ae29a4d0-a8a9-4190-be2f-38d338e106ca 00:29:17.364 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 601fac0c-af77-4f72-bfa0-655933e8429c 00:29:17.622 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:17.622 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:17.622 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:29:17.622 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:17.622 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:17.622 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:17.622 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:17.622 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:17.622 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:17.622 rmmod nvme_tcp 00:29:17.622 rmmod nvme_fabrics 00:29:17.622 rmmod nvme_keyring 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 509376 ']' 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 509376 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 509376 ']' 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 509376 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 509376 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 509376' 00:29:17.622 killing process with pid 509376 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 509376 00:29:17.622 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 509376 00:29:17.881 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:17.881 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:17.881 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:17.881 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:17.881 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:17.881 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:17.881 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:17.881 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:17.881 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:17.881 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.881 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.881 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:20.411 00:29:20.411 real 0m19.392s 00:29:20.411 user 0m57.001s 00:29:20.411 sys 0m8.030s 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:20.411 ************************************ 00:29:20.411 END TEST nvmf_lvol 00:29:20.411 ************************************ 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:20.411 ************************************ 00:29:20.411 START TEST nvmf_lvs_grow 00:29:20.411 
************************************ 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:20.411 * Looking for test storage... 00:29:20.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:20.411 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:20.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.412 --rc genhtml_branch_coverage=1 00:29:20.412 --rc genhtml_function_coverage=1 00:29:20.412 --rc genhtml_legend=1 00:29:20.412 --rc geninfo_all_blocks=1 00:29:20.412 --rc geninfo_unexecuted_blocks=1 00:29:20.412 00:29:20.412 ' 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:20.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.412 --rc genhtml_branch_coverage=1 00:29:20.412 --rc genhtml_function_coverage=1 00:29:20.412 --rc genhtml_legend=1 00:29:20.412 --rc geninfo_all_blocks=1 00:29:20.412 --rc geninfo_unexecuted_blocks=1 00:29:20.412 00:29:20.412 ' 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:20.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.412 --rc genhtml_branch_coverage=1 00:29:20.412 --rc genhtml_function_coverage=1 00:29:20.412 --rc genhtml_legend=1 00:29:20.412 --rc geninfo_all_blocks=1 00:29:20.412 --rc geninfo_unexecuted_blocks=1 00:29:20.412 00:29:20.412 ' 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:20.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.412 --rc genhtml_branch_coverage=1 00:29:20.412 --rc genhtml_function_coverage=1 00:29:20.412 --rc genhtml_legend=1 00:29:20.412 --rc geninfo_all_blocks=1 00:29:20.412 --rc geninfo_unexecuted_blocks=1 00:29:20.412 00:29:20.412 ' 00:29:20.412 10:48:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.412 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
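The NVMF_APP argument list being assembled here (and continued just below, where --interrupt-mode is appended) amounts to the small shell sketch that follows. The interrupt_mode variable is a stand-in for the harness's own switch; in the trace it has already been expanded to the literal test '[ 1 -eq 1 ]'. The binary path and the -i/-e values match what the target is launched with later in the log.

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP_SHM_ID=0                                # the harness passes -i 0 further down
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)      # shared-memory id and tracepoint group mask
    interrupt_mode=1                                 # stand-in for the harness switch, not the real variable name
    if [ "$interrupt_mode" -eq 1 ]; then
        NVMF_APP+=(--interrupt-mode)                 # reactors wait on events instead of busy-polling
    fi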
00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:20.413 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.312 10:48:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:29:22.312 Found 0000:82:00.0 (0x8086 - 0x159b) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:29:22.312 Found 0000:82:00.1 (0x8086 - 0x159b) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:29:22.312 Found net devices under 0000:82:00.0: cvl_0_0 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:29:22.312 Found net devices under 0000:82:00.1: cvl_0_1 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:22.312 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:22.313 10:48:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:22.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:29:22.313 00:29:22.313 --- 10.0.0.2 ping statistics --- 00:29:22.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.313 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:22.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:29:22.313 00:29:22.313 --- 10.0.0.1 ping statistics --- 00:29:22.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.313 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=513673 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 513673 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 513673 ']' 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:22.313 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:22.313 [2024-11-15 10:48:10.762451] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
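Condensed from the trace above, the network bring-up and the in-namespace target launch look like this; interface names cvl_0_0/cvl_0_1, the addresses, port 4420 and core mask 0x1 are taken verbatim from the log, while the long Jenkins workspace prefix is shortened to a relative path and the iptables comment tag is omitted.

    ip netns add cvl_0_0_ns_spdk                                   # target runs in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                             # reachability check before starting the target
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1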
00:29:22.313 [2024-11-15 10:48:10.763597] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:29:22.313 [2024-11-15 10:48:10.763675] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.570 [2024-11-15 10:48:10.835862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.571 [2024-11-15 10:48:10.891274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.571 [2024-11-15 10:48:10.891326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.571 [2024-11-15 10:48:10.891372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.571 [2024-11-15 10:48:10.891385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.571 [2024-11-15 10:48:10.891408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.571 [2024-11-15 10:48:10.892040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.571 [2024-11-15 10:48:10.978883] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:22.571 [2024-11-15 10:48:10.979194] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:22.571 10:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:22.571 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:29:22.571 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:22.571 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:22.571 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:22.571 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.571 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:22.828 [2024-11-15 10:48:11.276624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:23.085 ************************************ 00:29:23.085 START TEST lvs_grow_clean 00:29:23.085 ************************************ 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:23.085 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:23.342 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:23.342 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:23.600 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=86d7bda5-c11a-4d31-98c6-aaef44295da0 00:29:23.600 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d7bda5-c11a-4d31-98c6-aaef44295da0 00:29:23.600 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:23.858 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:23.858 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:23.858 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86d7bda5-c11a-4d31-98c6-aaef44295da0 lvol 150 00:29:24.116 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f9702c1e-30a8-43f4-b4ac-03a85f053d0d 00:29:24.116 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:24.116 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:24.374 [2024-11-15 10:48:12.692509] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:24.374 [2024-11-15 10:48:12.692612] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:24.374 true 00:29:24.374 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d7bda5-c11a-4d31-98c6-aaef44295da0 00:29:24.374 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:24.631 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:24.631 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:24.888 10:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f9702c1e-30a8-43f4-b4ac-03a85f053d0d 00:29:25.146 10:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:25.403 [2024-11-15 10:48:13.784831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.403 10:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:25.661 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=514109 00:29:25.661 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:25.661 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.661 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 514109 /var/tmp/bdevperf.sock 00:29:25.661 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 514109 ']' 00:29:25.661 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:25.661 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:25.661 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:25.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:25.661 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:25.661 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:25.661 [2024-11-15 10:48:14.112874] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:29:25.661 [2024-11-15 10:48:14.112964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514109 ] 00:29:25.919 [2024-11-15 10:48:14.181209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.919 [2024-11-15 10:48:14.240047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.919 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:25.919 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:29:25.919 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:26.484 Nvme0n1 00:29:26.484 10:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:26.741 [ 00:29:26.741 { 00:29:26.741 "name": "Nvme0n1", 00:29:26.741 "aliases": [ 00:29:26.741 "f9702c1e-30a8-43f4-b4ac-03a85f053d0d" 00:29:26.741 ], 00:29:26.741 "product_name": "NVMe disk", 00:29:26.741 "block_size": 4096, 00:29:26.741 "num_blocks": 38912, 00:29:26.741 "uuid": "f9702c1e-30a8-43f4-b4ac-03a85f053d0d", 00:29:26.741 "numa_id": 1, 00:29:26.741 "assigned_rate_limits": { 00:29:26.741 "rw_ios_per_sec": 0, 00:29:26.741 "rw_mbytes_per_sec": 0, 00:29:26.741 "r_mbytes_per_sec": 0, 00:29:26.741 "w_mbytes_per_sec": 0 00:29:26.741 }, 00:29:26.741 "claimed": false, 00:29:26.741 "zoned": false, 00:29:26.741 "supported_io_types": { 00:29:26.741 "read": true, 00:29:26.741 "write": true, 00:29:26.741 "unmap": true, 00:29:26.741 "flush": true, 00:29:26.741 "reset": true, 00:29:26.741 "nvme_admin": true, 00:29:26.741 "nvme_io": true, 00:29:26.741 "nvme_io_md": false, 00:29:26.741 "write_zeroes": true, 00:29:26.741 "zcopy": false, 00:29:26.741 "get_zone_info": false, 00:29:26.741 "zone_management": false, 00:29:26.742 "zone_append": false, 00:29:26.742 "compare": true, 00:29:26.742 "compare_and_write": true, 00:29:26.742 "abort": true, 00:29:26.742 "seek_hole": false, 00:29:26.742 "seek_data": false, 00:29:26.742 "copy": true, 
00:29:26.742 "nvme_iov_md": false 00:29:26.742 }, 00:29:26.742 "memory_domains": [ 00:29:26.742 { 00:29:26.742 "dma_device_id": "system", 00:29:26.742 "dma_device_type": 1 00:29:26.742 } 00:29:26.742 ], 00:29:26.742 "driver_specific": { 00:29:26.742 "nvme": [ 00:29:26.742 { 00:29:26.742 "trid": { 00:29:26.742 "trtype": "TCP", 00:29:26.742 "adrfam": "IPv4", 00:29:26.742 "traddr": "10.0.0.2", 00:29:26.742 "trsvcid": "4420", 00:29:26.742 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:26.742 }, 00:29:26.742 "ctrlr_data": { 00:29:26.742 "cntlid": 1, 00:29:26.742 "vendor_id": "0x8086", 00:29:26.742 "model_number": "SPDK bdev Controller", 00:29:26.742 "serial_number": "SPDK0", 00:29:26.742 "firmware_revision": "25.01", 00:29:26.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.742 "oacs": { 00:29:26.742 "security": 0, 00:29:26.742 "format": 0, 00:29:26.742 "firmware": 0, 00:29:26.742 "ns_manage": 0 00:29:26.742 }, 00:29:26.742 "multi_ctrlr": true, 00:29:26.742 "ana_reporting": false 00:29:26.742 }, 00:29:26.742 "vs": { 00:29:26.742 "nvme_version": "1.3" 00:29:26.742 }, 00:29:26.742 "ns_data": { 00:29:26.742 "id": 1, 00:29:26.742 "can_share": true 00:29:26.742 } 00:29:26.742 } 00:29:26.742 ], 00:29:26.742 "mp_policy": "active_passive" 00:29:26.742 } 00:29:26.742 } 00:29:26.742 ] 00:29:26.742 10:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=514245 00:29:26.742 10:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:26.742 10:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:26.742 Running I/O for 10 seconds... 
00:29:28.110 Latency(us) 00:29:28.110 [2024-11-15T09:48:16.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.110 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.110 Nvme0n1 : 1.00 16129.00 63.00 0.00 0.00 0.00 0.00 0.00 00:29:28.110 [2024-11-15T09:48:16.573Z] =================================================================================================================== 00:29:28.110 [2024-11-15T09:48:16.573Z] Total : 16129.00 63.00 0.00 0.00 0.00 0.00 0.00 00:29:28.110 00:29:28.674 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 86d7bda5-c11a-4d31-98c6-aaef44295da0 00:29:28.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.932 Nvme0n1 : 2.00 16129.00 63.00 0.00 0.00 0.00 0.00 0.00 00:29:28.932 [2024-11-15T09:48:17.395Z] =================================================================================================================== 00:29:28.932 [2024-11-15T09:48:17.395Z] Total : 16129.00 63.00 0.00 0.00 0.00 0.00 0.00 00:29:28.932 00:29:29.189 true 00:29:29.189 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d7bda5-c11a-4d31-98c6-aaef44295da0 00:29:29.189 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:29.447 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:29.447 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:29.447 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 514245 00:29:30.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.012 Nvme0n1 : 3.00 16213.67 63.33 0.00 0.00 0.00 0.00 0.00 00:29:30.012 [2024-11-15T09:48:18.475Z] =================================================================================================================== 00:29:30.012 [2024-11-15T09:48:18.475Z] Total : 16213.67 63.33 0.00 0.00 0.00 0.00 0.00 00:29:30.012 00:29:30.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.947 Nvme0n1 : 4.00 16399.00 64.06 0.00 0.00 0.00 0.00 0.00 00:29:30.947 [2024-11-15T09:48:19.410Z] =================================================================================================================== 00:29:30.947 [2024-11-15T09:48:19.410Z] Total : 16399.00 64.06 0.00 0.00 0.00 0.00 0.00 00:29:30.947 00:29:31.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.880 Nvme0n1 : 5.00 16491.40 64.42 0.00 0.00 0.00 0.00 0.00 00:29:31.880 [2024-11-15T09:48:20.343Z] =================================================================================================================== 00:29:31.880 [2024-11-15T09:48:20.343Z] Total : 16491.40 64.42 0.00 0.00 0.00 0.00 0.00 00:29:31.880 00:29:32.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.814 Nvme0n1 : 6.00 16558.00 64.68 0.00 0.00 0.00 0.00 0.00 00:29:32.814 [2024-11-15T09:48:21.277Z] 
=================================================================================================================== 00:29:32.814 [2024-11-15T09:48:21.277Z] Total : 16558.00 64.68 0.00 0.00 0.00 0.00 0.00 00:29:32.814 00:29:34.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.187 Nvme0n1 : 7.00 16623.71 64.94 0.00 0.00 0.00 0.00 0.00 00:29:34.187 [2024-11-15T09:48:22.650Z] =================================================================================================================== 00:29:34.187 [2024-11-15T09:48:22.650Z] Total : 16623.71 64.94 0.00 0.00 0.00 0.00 0.00 00:29:34.187 00:29:34.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.821 Nvme0n1 : 8.00 16649.25 65.04 0.00 0.00 0.00 0.00 0.00 00:29:34.821 [2024-11-15T09:48:23.284Z] =================================================================================================================== 00:29:34.821 [2024-11-15T09:48:23.284Z] Total : 16649.25 65.04 0.00 0.00 0.00 0.00 0.00 00:29:34.821 00:29:35.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.777 Nvme0n1 : 9.00 16683.11 65.17 0.00 0.00 0.00 0.00 0.00 00:29:35.777 [2024-11-15T09:48:24.240Z] =================================================================================================================== 00:29:35.777 [2024-11-15T09:48:24.240Z] Total : 16683.11 65.17 0.00 0.00 0.00 0.00 0.00 00:29:35.777 00:29:37.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.151 Nvme0n1 : 10.00 16716.60 65.30 0.00 0.00 0.00 0.00 0.00 00:29:37.151 [2024-11-15T09:48:25.614Z] =================================================================================================================== 00:29:37.151 [2024-11-15T09:48:25.614Z] Total : 16716.60 65.30 0.00 0.00 0.00 0.00 0.00 00:29:37.151 00:29:37.151 00:29:37.151 Latency(us) 00:29:37.151 [2024-11-15T09:48:25.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.151 Nvme0n1 : 10.01 16719.62 65.31 0.00 0.00 7651.69 3907.89 17476.27 00:29:37.151 [2024-11-15T09:48:25.614Z] =================================================================================================================== 00:29:37.151 [2024-11-15T09:48:25.614Z] Total : 16719.62 65.31 0.00 0.00 7651.69 3907.89 17476.27 00:29:37.151 { 00:29:37.151 "results": [ 00:29:37.151 { 00:29:37.151 "job": "Nvme0n1", 00:29:37.151 "core_mask": "0x2", 00:29:37.151 "workload": "randwrite", 00:29:37.151 "status": "finished", 00:29:37.151 "queue_depth": 128, 00:29:37.151 "io_size": 4096, 00:29:37.151 "runtime": 10.005851, 00:29:37.151 "iops": 16719.61735188741, 00:29:37.151 "mibps": 65.31100528081019, 00:29:37.151 "io_failed": 0, 00:29:37.151 "io_timeout": 0, 00:29:37.151 "avg_latency_us": 7651.685406069333, 00:29:37.151 "min_latency_us": 3907.8874074074074, 00:29:37.151 "max_latency_us": 17476.266666666666 00:29:37.151 } 00:29:37.152 ], 00:29:37.152 "core_count": 1 00:29:37.152 } 00:29:37.152 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 514109 00:29:37.152 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 514109 ']' 00:29:37.152 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 514109 
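For reference, the lvstore-grow path exercised while that workload ran condenses to the sequence below; every command appears earlier in the trace with full paths, shortened here, and the lvstore UUID is captured into a shell variable instead of being repeated literally.

    truncate -s 200M ./test/nvmf/target/aio_bdev                    # 200 MiB backing file
    rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)          # reports 49 data clusters
    rpc.py bdev_lvol_create -u "$lvs" lvol 150                      # 150 MiB volume, exposed through cnode0
    truncate -s 400M ./test/nvmf/target/aio_bdev                    # enlarge the backing file
    rpc.py bdev_aio_rescan aio_bdev                                 # bdev picks up the new block count
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"                         # lvstore now reports 99 data clusters
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'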
00:29:37.152 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:29:37.152 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:37.152 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 514109 00:29:37.152 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:37.152 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:37.152 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 514109' 00:29:37.152 killing process with pid 514109 00:29:37.152 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 514109 00:29:37.152 Received shutdown signal, test time was about 10.000000 seconds 00:29:37.152 00:29:37.152 Latency(us) 00:29:37.152 [2024-11-15T09:48:25.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.152 [2024-11-15T09:48:25.615Z] =================================================================================================================== 00:29:37.152 [2024-11-15T09:48:25.615Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:37.152 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 514109 00:29:37.152 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.410 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:37.668 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d7bda5-c11a-4d31-98c6-aaef44295da0 00:29:37.668 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:37.925 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:37.925 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:37.925 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:38.183 [2024-11-15 10:48:26.612560] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:38.183 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d7bda5-c11a-4d31-98c6-aaef44295da0 
00:29:38.183 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:29:38.183 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d7bda5-c11a-4d31-98c6-aaef44295da0 00:29:38.183 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.183 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:38.183 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.183 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:38.183 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.183 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:38.183 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.183 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:38.183 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d7bda5-c11a-4d31-98c6-aaef44295da0 00:29:38.441 request: 00:29:38.441 { 00:29:38.441 "uuid": "86d7bda5-c11a-4d31-98c6-aaef44295da0", 00:29:38.441 "method": "bdev_lvol_get_lvstores", 00:29:38.441 "req_id": 1 00:29:38.441 } 00:29:38.441 Got JSON-RPC error response 00:29:38.441 response: 00:29:38.441 { 00:29:38.441 "code": -19, 00:29:38.441 "message": "No such device" 00:29:38.441 } 00:29:38.698 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:29:38.698 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:38.698 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:38.698 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:38.699 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:38.956 aio_bdev 00:29:38.956 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
f9702c1e-30a8-43f4-b4ac-03a85f053d0d 00:29:38.956 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=f9702c1e-30a8-43f4-b4ac-03a85f053d0d 00:29:38.956 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:38.956 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:29:38.956 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:38.956 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:38.956 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:39.214 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f9702c1e-30a8-43f4-b4ac-03a85f053d0d -t 2000 00:29:39.473 [ 00:29:39.473 { 00:29:39.473 "name": "f9702c1e-30a8-43f4-b4ac-03a85f053d0d", 00:29:39.473 "aliases": [ 00:29:39.473 "lvs/lvol" 00:29:39.473 ], 00:29:39.473 "product_name": "Logical Volume", 00:29:39.473 "block_size": 4096, 00:29:39.473 "num_blocks": 38912, 00:29:39.473 "uuid": "f9702c1e-30a8-43f4-b4ac-03a85f053d0d", 00:29:39.473 "assigned_rate_limits": { 00:29:39.473 "rw_ios_per_sec": 0, 00:29:39.473 "rw_mbytes_per_sec": 0, 00:29:39.473 "r_mbytes_per_sec": 0, 00:29:39.473 "w_mbytes_per_sec": 0 00:29:39.473 }, 00:29:39.473 "claimed": false, 00:29:39.473 "zoned": false, 00:29:39.473 "supported_io_types": { 00:29:39.473 "read": true, 00:29:39.473 "write": true, 00:29:39.473 "unmap": true, 00:29:39.473 "flush": false, 00:29:39.473 "reset": true, 00:29:39.473 "nvme_admin": false, 00:29:39.473 "nvme_io": false, 00:29:39.473 "nvme_io_md": false, 00:29:39.473 "write_zeroes": true, 00:29:39.473 "zcopy": false, 00:29:39.473 "get_zone_info": false, 00:29:39.473 "zone_management": false, 00:29:39.473 "zone_append": false, 00:29:39.473 "compare": false, 00:29:39.473 "compare_and_write": false, 00:29:39.473 "abort": false, 00:29:39.473 "seek_hole": true, 00:29:39.473 "seek_data": true, 00:29:39.473 "copy": false, 00:29:39.473 "nvme_iov_md": false 00:29:39.473 }, 00:29:39.473 "driver_specific": { 00:29:39.473 "lvol": { 00:29:39.473 "lvol_store_uuid": "86d7bda5-c11a-4d31-98c6-aaef44295da0", 00:29:39.473 "base_bdev": "aio_bdev", 00:29:39.473 "thin_provision": false, 00:29:39.473 "num_allocated_clusters": 38, 00:29:39.473 "snapshot": false, 00:29:39.473 "clone": false, 00:29:39.473 "esnap_clone": false 00:29:39.473 } 00:29:39.473 } 00:29:39.473 } 00:29:39.473 ] 00:29:39.473 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:29:39.473 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d7bda5-c11a-4d31-98c6-aaef44295da0 00:29:39.473 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:39.731 10:48:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:39.731 10:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d7bda5-c11a-4d31-98c6-aaef44295da0 00:29:39.731 10:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:39.989 10:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:39.989 10:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f9702c1e-30a8-43f4-b4ac-03a85f053d0d 00:29:40.248 10:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86d7bda5-c11a-4d31-98c6-aaef44295da0 00:29:40.506 10:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:40.764 00:29:40.764 real 0m17.794s 00:29:40.764 user 0m17.420s 00:29:40.764 sys 0m1.898s 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:40.764 ************************************ 00:29:40.764 END TEST lvs_grow_clean 00:29:40.764 ************************************ 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:40.764 ************************************ 00:29:40.764 START TEST lvs_grow_dirty 00:29:40.764 ************************************ 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:40.764 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:41.022 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:41.022 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:41.280 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9310029f-8598-4b86-b3ac-38ac0ab404b2 00:29:41.280 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 00:29:41.280 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:41.846 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:41.846 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:41.846 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 lvol 150 00:29:41.846 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8e44e9e9-5c4e-4567-96c0-afc0db3346b8 00:29:41.846 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:41.846 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:42.412 [2024-11-15 10:48:30.572531] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:42.412 [2024-11-15 10:48:30.572625] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:42.412 true 00:29:42.412 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 00:29:42.412 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:42.412 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:42.412 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:42.669 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8e44e9e9-5c4e-4567-96c0-afc0db3346b8 00:29:43.234 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:43.234 [2024-11-15 10:48:31.656821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.234 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:43.491 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=516273 00:29:43.492 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:43.492 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:43.492 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 516273 /var/tmp/bdevperf.sock 00:29:43.492 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 516273 ']' 00:29:43.492 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:43.492 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:43.492 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:43.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
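Up to this point the dirty-grow setup has built the stack that bdevperf is about to exercise over NVMe/TCP: a 200 MiB file-backed AIO bdev, an lvstore with 4 MiB clusters, a 150 MiB lvol, a grow of the backing file to 400 MiB followed by an AIO rescan, and an NVMe-oF subsystem listening on 10.0.0.2:4420. A condensed sketch of that sequence, assuming a running nvmf_tgt and using an illustrative file path in place of the repository's aio_bdev file:

    # File-backed AIO bdev with a 4 KiB block size; the file is grown later.
    truncate -s 200M /tmp/aio_bdev_file
    scripts/rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096
    # lvstore with 4 MiB clusters, then a 150 MiB lvol on top of it.
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    # Grow the backing file and let the AIO bdev pick up the new size.
    truncate -s 400M /tmp/aio_bdev_file
    scripts/rpc.py bdev_aio_rescan aio_bdev
    # Export the lvol over NVMe/TCP for the bdevperf initiator.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420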
00:29:43.492 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:43.492 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:43.749 [2024-11-15 10:48:31.993406] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:29:43.749 [2024-11-15 10:48:31.993506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid516273 ] 00:29:43.749 [2024-11-15 10:48:32.061873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.749 [2024-11-15 10:48:32.124681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.007 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:44.007 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:29:44.007 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:44.264 Nvme0n1 00:29:44.264 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:44.522 [ 00:29:44.522 { 00:29:44.522 "name": "Nvme0n1", 00:29:44.522 "aliases": [ 00:29:44.522 "8e44e9e9-5c4e-4567-96c0-afc0db3346b8" 00:29:44.522 ], 00:29:44.522 "product_name": "NVMe disk", 00:29:44.522 "block_size": 4096, 00:29:44.522 "num_blocks": 38912, 00:29:44.522 "uuid": "8e44e9e9-5c4e-4567-96c0-afc0db3346b8", 00:29:44.522 "numa_id": 1, 00:29:44.522 "assigned_rate_limits": { 00:29:44.522 "rw_ios_per_sec": 0, 00:29:44.522 "rw_mbytes_per_sec": 0, 00:29:44.522 "r_mbytes_per_sec": 0, 00:29:44.522 "w_mbytes_per_sec": 0 00:29:44.522 }, 00:29:44.522 "claimed": false, 00:29:44.522 "zoned": false, 00:29:44.522 "supported_io_types": { 00:29:44.522 "read": true, 00:29:44.522 "write": true, 00:29:44.522 "unmap": true, 00:29:44.522 "flush": true, 00:29:44.522 "reset": true, 00:29:44.522 "nvme_admin": true, 00:29:44.522 "nvme_io": true, 00:29:44.522 "nvme_io_md": false, 00:29:44.522 "write_zeroes": true, 00:29:44.522 "zcopy": false, 00:29:44.522 "get_zone_info": false, 00:29:44.522 "zone_management": false, 00:29:44.522 "zone_append": false, 00:29:44.522 "compare": true, 00:29:44.522 "compare_and_write": true, 00:29:44.522 "abort": true, 00:29:44.522 "seek_hole": false, 00:29:44.522 "seek_data": false, 00:29:44.522 "copy": true, 00:29:44.522 "nvme_iov_md": false 00:29:44.522 }, 00:29:44.522 "memory_domains": [ 00:29:44.522 { 00:29:44.522 "dma_device_id": "system", 00:29:44.522 "dma_device_type": 1 00:29:44.522 } 00:29:44.522 ], 00:29:44.523 "driver_specific": { 00:29:44.523 "nvme": [ 00:29:44.523 { 00:29:44.523 "trid": { 00:29:44.523 "trtype": "TCP", 00:29:44.523 "adrfam": "IPv4", 00:29:44.523 "traddr": "10.0.0.2", 00:29:44.523 "trsvcid": "4420", 00:29:44.523 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:44.523 }, 00:29:44.523 "ctrlr_data": { 
00:29:44.523 "cntlid": 1, 00:29:44.523 "vendor_id": "0x8086", 00:29:44.523 "model_number": "SPDK bdev Controller", 00:29:44.523 "serial_number": "SPDK0", 00:29:44.523 "firmware_revision": "25.01", 00:29:44.523 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:44.523 "oacs": { 00:29:44.523 "security": 0, 00:29:44.523 "format": 0, 00:29:44.523 "firmware": 0, 00:29:44.523 "ns_manage": 0 00:29:44.523 }, 00:29:44.523 "multi_ctrlr": true, 00:29:44.523 "ana_reporting": false 00:29:44.523 }, 00:29:44.523 "vs": { 00:29:44.523 "nvme_version": "1.3" 00:29:44.523 }, 00:29:44.523 "ns_data": { 00:29:44.523 "id": 1, 00:29:44.523 "can_share": true 00:29:44.523 } 00:29:44.523 } 00:29:44.523 ], 00:29:44.523 "mp_policy": "active_passive" 00:29:44.523 } 00:29:44.523 } 00:29:44.523 ] 00:29:44.523 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=516317 00:29:44.523 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:44.523 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:44.780 Running I/O for 10 seconds... 00:29:45.712 Latency(us) 00:29:45.712 [2024-11-15T09:48:34.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.713 Nvme0n1 : 1.00 16129.00 63.00 0.00 0.00 0.00 0.00 0.00 00:29:45.713 [2024-11-15T09:48:34.176Z] =================================================================================================================== 00:29:45.713 [2024-11-15T09:48:34.176Z] Total : 16129.00 63.00 0.00 0.00 0.00 0.00 0.00 00:29:45.713 00:29:46.645 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 00:29:46.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:46.645 Nvme0n1 : 2.00 16383.00 64.00 0.00 0.00 0.00 0.00 0.00 00:29:46.645 [2024-11-15T09:48:35.108Z] =================================================================================================================== 00:29:46.645 [2024-11-15T09:48:35.108Z] Total : 16383.00 64.00 0.00 0.00 0.00 0.00 0.00 00:29:46.645 00:29:46.902 true 00:29:46.902 10:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 00:29:46.902 10:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:47.160 10:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:47.160 10:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:47.160 10:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 516317 00:29:47.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:47.724 Nvme0n1 : 3.00 
16436.67 64.21 0.00 0.00 0.00 0.00 0.00 00:29:47.724 [2024-11-15T09:48:36.187Z] =================================================================================================================== 00:29:47.724 [2024-11-15T09:48:36.187Z] Total : 16436.67 64.21 0.00 0.00 0.00 0.00 0.00 00:29:47.724 00:29:48.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:48.657 Nvme0n1 : 4.00 16527.00 64.56 0.00 0.00 0.00 0.00 0.00 00:29:48.657 [2024-11-15T09:48:37.120Z] =================================================================================================================== 00:29:48.657 [2024-11-15T09:48:37.120Z] Total : 16527.00 64.56 0.00 0.00 0.00 0.00 0.00 00:29:48.657 00:29:49.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:49.589 Nvme0n1 : 5.00 16599.80 64.84 0.00 0.00 0.00 0.00 0.00 00:29:49.589 [2024-11-15T09:48:38.052Z] =================================================================================================================== 00:29:49.589 [2024-11-15T09:48:38.052Z] Total : 16599.80 64.84 0.00 0.00 0.00 0.00 0.00 00:29:49.589 00:29:50.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.960 Nvme0n1 : 6.00 16659.00 65.07 0.00 0.00 0.00 0.00 0.00 00:29:50.960 [2024-11-15T09:48:39.423Z] =================================================================================================================== 00:29:50.960 [2024-11-15T09:48:39.423Z] Total : 16659.00 65.07 0.00 0.00 0.00 0.00 0.00 00:29:50.960 00:29:51.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:51.893 Nvme0n1 : 7.00 16628.57 64.96 0.00 0.00 0.00 0.00 0.00 00:29:51.893 [2024-11-15T09:48:40.356Z] =================================================================================================================== 00:29:51.893 [2024-11-15T09:48:40.356Z] Total : 16628.57 64.96 0.00 0.00 0.00 0.00 0.00 00:29:51.893 00:29:52.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:52.825 Nvme0n1 : 8.00 16629.62 64.96 0.00 0.00 0.00 0.00 0.00 00:29:52.825 [2024-11-15T09:48:41.288Z] =================================================================================================================== 00:29:52.825 [2024-11-15T09:48:41.288Z] Total : 16629.62 64.96 0.00 0.00 0.00 0.00 0.00 00:29:52.825 00:29:53.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:53.757 Nvme0n1 : 9.00 16630.44 64.96 0.00 0.00 0.00 0.00 0.00 00:29:53.757 [2024-11-15T09:48:42.220Z] =================================================================================================================== 00:29:53.757 [2024-11-15T09:48:42.220Z] Total : 16630.44 64.96 0.00 0.00 0.00 0.00 0.00 00:29:53.757 00:29:54.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:54.689 Nvme0n1 : 10.00 16656.50 65.06 0.00 0.00 0.00 0.00 0.00 00:29:54.689 [2024-11-15T09:48:43.152Z] =================================================================================================================== 00:29:54.689 [2024-11-15T09:48:43.152Z] Total : 16656.50 65.06 0.00 0.00 0.00 0.00 0.00 00:29:54.689 00:29:54.689 00:29:54.689 Latency(us) 00:29:54.689 [2024-11-15T09:48:43.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:54.689 Nvme0n1 : 10.01 16655.99 65.06 0.00 0.00 7680.93 3131.16 17476.27 00:29:54.689 
[2024-11-15T09:48:43.152Z] =================================================================================================================== 00:29:54.689 [2024-11-15T09:48:43.152Z] Total : 16655.99 65.06 0.00 0.00 7680.93 3131.16 17476.27 00:29:54.689 { 00:29:54.689 "results": [ 00:29:54.689 { 00:29:54.689 "job": "Nvme0n1", 00:29:54.689 "core_mask": "0x2", 00:29:54.689 "workload": "randwrite", 00:29:54.689 "status": "finished", 00:29:54.689 "queue_depth": 128, 00:29:54.689 "io_size": 4096, 00:29:54.689 "runtime": 10.007989, 00:29:54.690 "iops": 16655.99352677146, 00:29:54.690 "mibps": 65.06247471395102, 00:29:54.690 "io_failed": 0, 00:29:54.690 "io_timeout": 0, 00:29:54.690 "avg_latency_us": 7680.934828012729, 00:29:54.690 "min_latency_us": 3131.1644444444446, 00:29:54.690 "max_latency_us": 17476.266666666666 00:29:54.690 } 00:29:54.690 ], 00:29:54.690 "core_count": 1 00:29:54.690 } 00:29:54.690 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 516273 00:29:54.690 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 516273 ']' 00:29:54.690 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 516273 00:29:54.690 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:29:54.690 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:54.690 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 516273 00:29:54.690 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:54.690 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:54.690 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 516273' 00:29:54.690 killing process with pid 516273 00:29:54.690 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 516273 00:29:54.690 Received shutdown signal, test time was about 10.000000 seconds 00:29:54.690 00:29:54.690 Latency(us) 00:29:54.690 [2024-11-15T09:48:43.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.690 [2024-11-15T09:48:43.153Z] =================================================================================================================== 00:29:54.690 [2024-11-15T09:48:43.153Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:54.690 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 516273 00:29:54.947 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:55.205 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:29:55.462 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 00:29:55.462 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 513673 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 513673 00:29:56.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 513673 Killed "${NVMF_APP[@]}" "$@" 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=517626 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 517626 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 517626 ']' 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.027 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:56.028 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:56.028 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:56.028 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:56.028 [2024-11-15 10:48:44.305293] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:56.028 [2024-11-15 10:48:44.306394] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:29:56.028 [2024-11-15 10:48:44.306464] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.028 [2024-11-15 10:48:44.382030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.028 [2024-11-15 10:48:44.441982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.028 [2024-11-15 10:48:44.442044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.028 [2024-11-15 10:48:44.442058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.028 [2024-11-15 10:48:44.442069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.028 [2024-11-15 10:48:44.442078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.028 [2024-11-15 10:48:44.442709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.286 [2024-11-15 10:48:44.528996] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:56.286 [2024-11-15 10:48:44.529294] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
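The restart recorded above is the point of the dirty-mode test: the original target that owned the lvstore was killed with SIGKILL while the lvstore was still open, and a fresh nvmf_tgt was started in interrupt mode. When the AIO bdev is re-created in the lines that follow, the blobstore load notices the unclean shutdown and replays recovery ("Performing recovery on blobstore" / "Recover: blob ..."), after which the grown lvstore and its lvol are visible again. A condensed sketch of that re-attach and verification step, reusing the illustrative file path from the earlier sketch and the UUIDs from this run:

    # Re-create the AIO bdev on the same backing file; loading the dirty lvstore
    # triggers blobstore recovery before the lvol bdev reappears.
    scripts/rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096
    scripts/rpc.py bdev_wait_for_examine
    # Wait for the recovered lvol, then confirm the grow survived the crash:
    # this run reports total_data_clusters = 99 for the 400 MiB file with 4 MiB clusters.
    scripts/rpc.py bdev_get_bdevs -b 8e44e9e9-5c4e-4567-96c0-afc0db3346b8 -t 2000
    scripts/rpc.py bdev_lvol_get_lvstores -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 \
        | jq -r '.[0].total_data_clusters'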
00:29:56.286 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:56.286 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:29:56.286 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:56.286 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:56.286 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:56.286 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.286 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:56.543 [2024-11-15 10:48:44.869721] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:56.543 [2024-11-15 10:48:44.869875] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:56.543 [2024-11-15 10:48:44.869932] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:56.543 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:56.543 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8e44e9e9-5c4e-4567-96c0-afc0db3346b8 00:29:56.543 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=8e44e9e9-5c4e-4567-96c0-afc0db3346b8 00:29:56.543 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:56.543 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:29:56.543 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:56.543 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:56.543 10:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:56.800 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8e44e9e9-5c4e-4567-96c0-afc0db3346b8 -t 2000 00:29:57.058 [ 00:29:57.058 { 00:29:57.058 "name": "8e44e9e9-5c4e-4567-96c0-afc0db3346b8", 00:29:57.058 "aliases": [ 00:29:57.058 "lvs/lvol" 00:29:57.058 ], 00:29:57.058 "product_name": "Logical Volume", 00:29:57.058 "block_size": 4096, 00:29:57.058 "num_blocks": 38912, 00:29:57.058 "uuid": "8e44e9e9-5c4e-4567-96c0-afc0db3346b8", 00:29:57.058 "assigned_rate_limits": { 00:29:57.058 "rw_ios_per_sec": 0, 00:29:57.058 "rw_mbytes_per_sec": 0, 00:29:57.058 
"r_mbytes_per_sec": 0, 00:29:57.058 "w_mbytes_per_sec": 0 00:29:57.058 }, 00:29:57.058 "claimed": false, 00:29:57.058 "zoned": false, 00:29:57.058 "supported_io_types": { 00:29:57.058 "read": true, 00:29:57.058 "write": true, 00:29:57.058 "unmap": true, 00:29:57.058 "flush": false, 00:29:57.058 "reset": true, 00:29:57.058 "nvme_admin": false, 00:29:57.058 "nvme_io": false, 00:29:57.058 "nvme_io_md": false, 00:29:57.058 "write_zeroes": true, 00:29:57.058 "zcopy": false, 00:29:57.058 "get_zone_info": false, 00:29:57.058 "zone_management": false, 00:29:57.058 "zone_append": false, 00:29:57.058 "compare": false, 00:29:57.058 "compare_and_write": false, 00:29:57.058 "abort": false, 00:29:57.058 "seek_hole": true, 00:29:57.058 "seek_data": true, 00:29:57.058 "copy": false, 00:29:57.058 "nvme_iov_md": false 00:29:57.058 }, 00:29:57.058 "driver_specific": { 00:29:57.058 "lvol": { 00:29:57.058 "lvol_store_uuid": "9310029f-8598-4b86-b3ac-38ac0ab404b2", 00:29:57.058 "base_bdev": "aio_bdev", 00:29:57.058 "thin_provision": false, 00:29:57.058 "num_allocated_clusters": 38, 00:29:57.058 "snapshot": false, 00:29:57.058 "clone": false, 00:29:57.058 "esnap_clone": false 00:29:57.058 } 00:29:57.058 } 00:29:57.058 } 00:29:57.058 ] 00:29:57.058 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:29:57.058 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 00:29:57.058 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:57.317 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:57.317 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 00:29:57.317 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:57.575 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:57.575 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:57.832 [2024-11-15 10:48:46.235208] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:57.832 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 00:29:57.832 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:29:57.832 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 00:29:57.832 10:48:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:57.832 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.832 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:57.832 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.832 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:57.832 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.832 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:57.832 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:57.832 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 00:29:58.090 request: 00:29:58.090 { 00:29:58.090 "uuid": "9310029f-8598-4b86-b3ac-38ac0ab404b2", 00:29:58.090 "method": "bdev_lvol_get_lvstores", 00:29:58.090 "req_id": 1 00:29:58.090 } 00:29:58.090 Got JSON-RPC error response 00:29:58.090 response: 00:29:58.090 { 00:29:58.090 "code": -19, 00:29:58.090 "message": "No such device" 00:29:58.090 } 00:29:58.090 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:29:58.090 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:58.090 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:58.090 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:58.090 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:58.348 aio_bdev 00:29:58.606 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8e44e9e9-5c4e-4567-96c0-afc0db3346b8 00:29:58.606 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=8e44e9e9-5c4e-4567-96c0-afc0db3346b8 00:29:58.606 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:58.606 10:48:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:29:58.606 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:58.606 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:58.606 10:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:58.891 10:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8e44e9e9-5c4e-4567-96c0-afc0db3346b8 -t 2000 00:29:59.149 [ 00:29:59.149 { 00:29:59.149 "name": "8e44e9e9-5c4e-4567-96c0-afc0db3346b8", 00:29:59.149 "aliases": [ 00:29:59.149 "lvs/lvol" 00:29:59.149 ], 00:29:59.149 "product_name": "Logical Volume", 00:29:59.149 "block_size": 4096, 00:29:59.149 "num_blocks": 38912, 00:29:59.149 "uuid": "8e44e9e9-5c4e-4567-96c0-afc0db3346b8", 00:29:59.149 "assigned_rate_limits": { 00:29:59.149 "rw_ios_per_sec": 0, 00:29:59.149 "rw_mbytes_per_sec": 0, 00:29:59.149 "r_mbytes_per_sec": 0, 00:29:59.149 "w_mbytes_per_sec": 0 00:29:59.149 }, 00:29:59.149 "claimed": false, 00:29:59.149 "zoned": false, 00:29:59.149 "supported_io_types": { 00:29:59.149 "read": true, 00:29:59.149 "write": true, 00:29:59.149 "unmap": true, 00:29:59.149 "flush": false, 00:29:59.149 "reset": true, 00:29:59.149 "nvme_admin": false, 00:29:59.149 "nvme_io": false, 00:29:59.149 "nvme_io_md": false, 00:29:59.149 "write_zeroes": true, 00:29:59.149 "zcopy": false, 00:29:59.149 "get_zone_info": false, 00:29:59.149 "zone_management": false, 00:29:59.149 "zone_append": false, 00:29:59.149 "compare": false, 00:29:59.149 "compare_and_write": false, 00:29:59.149 "abort": false, 00:29:59.149 "seek_hole": true, 00:29:59.149 "seek_data": true, 00:29:59.149 "copy": false, 00:29:59.149 "nvme_iov_md": false 00:29:59.149 }, 00:29:59.149 "driver_specific": { 00:29:59.149 "lvol": { 00:29:59.149 "lvol_store_uuid": "9310029f-8598-4b86-b3ac-38ac0ab404b2", 00:29:59.149 "base_bdev": "aio_bdev", 00:29:59.149 "thin_provision": false, 00:29:59.149 "num_allocated_clusters": 38, 00:29:59.149 "snapshot": false, 00:29:59.149 "clone": false, 00:29:59.149 "esnap_clone": false 00:29:59.149 } 00:29:59.149 } 00:29:59.149 } 00:29:59.149 ] 00:29:59.149 10:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:29:59.149 10:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 00:29:59.149 10:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:59.406 10:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:59.406 10:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 00:29:59.406 10:48:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:59.663 10:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:59.663 10:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8e44e9e9-5c4e-4567-96c0-afc0db3346b8 00:29:59.920 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9310029f-8598-4b86-b3ac-38ac0ab404b2 00:30:00.177 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:00.435 00:30:00.435 real 0m19.617s 00:30:00.435 user 0m36.327s 00:30:00.435 sys 0m5.018s 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:00.435 ************************************ 00:30:00.435 END TEST lvs_grow_dirty 00:30:00.435 ************************************ 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:00.435 nvmf_trace.0 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:00.435 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:00.435 rmmod nvme_tcp 00:30:00.435 rmmod nvme_fabrics 00:30:00.435 rmmod nvme_keyring 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 517626 ']' 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 517626 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 517626 ']' 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 517626 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 517626 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 517626' 00:30:00.693 killing process with pid 517626 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 517626 00:30:00.693 10:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 517626 00:30:00.952 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:00.952 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:00.952 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:00.952 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:00.952 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:00.952 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:00.952 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:00.952 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:00.952 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:00.952 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.952 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.952 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.860 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:02.860 00:30:02.860 real 0m42.799s 00:30:02.860 user 0m55.494s 00:30:02.860 sys 0m8.836s 00:30:02.860 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:02.860 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:02.860 ************************************ 00:30:02.860 END TEST nvmf_lvs_grow 00:30:02.860 ************************************ 00:30:02.860 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:02.860 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:02.860 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:02.860 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:02.860 ************************************ 00:30:02.860 START TEST nvmf_bdev_io_wait 00:30:02.860 ************************************ 00:30:02.860 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:02.860 * Looking for test storage... 
00:30:02.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:03.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.120 --rc genhtml_branch_coverage=1 00:30:03.120 --rc genhtml_function_coverage=1 00:30:03.120 --rc genhtml_legend=1 00:30:03.120 --rc geninfo_all_blocks=1 00:30:03.120 --rc geninfo_unexecuted_blocks=1 00:30:03.120 00:30:03.120 ' 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:03.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.120 --rc genhtml_branch_coverage=1 00:30:03.120 --rc genhtml_function_coverage=1 00:30:03.120 --rc genhtml_legend=1 00:30:03.120 --rc geninfo_all_blocks=1 00:30:03.120 --rc geninfo_unexecuted_blocks=1 00:30:03.120 00:30:03.120 ' 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:03.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.120 --rc genhtml_branch_coverage=1 00:30:03.120 --rc genhtml_function_coverage=1 00:30:03.120 --rc genhtml_legend=1 00:30:03.120 --rc geninfo_all_blocks=1 00:30:03.120 --rc geninfo_unexecuted_blocks=1 00:30:03.120 00:30:03.120 ' 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:03.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.120 --rc genhtml_branch_coverage=1 00:30:03.120 --rc genhtml_function_coverage=1 00:30:03.120 --rc genhtml_legend=1 00:30:03.120 --rc geninfo_all_blocks=1 00:30:03.120 --rc 
geninfo_unexecuted_blocks=1 00:30:03.120 00:30:03.120 ' 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.120 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:03.121 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:05.652 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
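The arrays being filled in above key the supported NICs by PCI vendor:device ID: 0x8086:0x1592 and 0x8086:0x159b go into e810[], 0x8086:0x37d2 into x722[], and the 0x15b3 entries cover the Mellanox parts; pci_devs[] is then narrowed to the e810 list (the [[ e810 == e810 ]] branch above). The loop that follows resolves each matching function to its kernel netdev through sysfs, which is what produces the "Found net devices under ..." messages below. Outside the harness the same checks can be done by hand; a minimal sketch (the PCI IDs and the 0000:82:00.0 address are taken from the trace, everything else is illustrative):

  # Any Intel E810 functions present? (vendor 0x8086, device 0x159b, as in e810[])
  lspci -d 8086:159b
  # Netdevs registered for one of those functions, mirroring the sysfs glob in common.sh
  ls /sys/bus/pci/devices/0000:82:00.0/net/
  # -> cvl_0_0 on this host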
00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:30:05.653 Found 0000:82:00.0 (0x8086 - 0x159b) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:30:05.653 Found 0000:82:00.1 (0x8086 - 0x159b) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:30:05.653 Found net devices under 0000:82:00.0: cvl_0_0 00:30:05.653 
10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:30:05.653 Found net devices under 0000:82:00.1: cvl_0_1 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:30:05.653 00:30:05.653 --- 10.0.0.2 ping statistics --- 00:30:05.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.653 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:30:05.653 00:30:05.653 --- 10.0.0.1 ping statistics --- 00:30:05.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.653 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.653 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=520267 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 520267 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 520267 ']' 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
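At this point nvmf_tcp_init has split the two E810 ports between namespaces: cvl_0_0 carries the target address 10.0.0.2/24 inside the cvl_0_0_ns_spdk namespace, cvl_0_1 keeps the initiator address 10.0.0.1/24 in the root namespace, an iptables ACCEPT rule is added for TCP port 4420, and the two pings above confirm both directions work. nvmfappstart then launches the target inside that namespace and waits for its RPC socket; condensed from the trace (the backgrounding and wait loop are a sketch of what the helper does, not a verbatim copy):

  # Start the NVMe-oF target in the target namespace; interrupt mode, cores 0-3, RPCs gated by --wait-for-rpc
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!   # 520267 in this run
  # waitforlisten then polls until the app answers on /var/tmp/spdk.sock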
00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.654 [2024-11-15 10:48:53.765815] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:05.654 [2024-11-15 10:48:53.766803] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:30:05.654 [2024-11-15 10:48:53.766850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.654 [2024-11-15 10:48:53.839184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.654 [2024-11-15 10:48:53.903012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.654 [2024-11-15 10:48:53.903073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.654 [2024-11-15 10:48:53.903103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.654 [2024-11-15 10:48:53.903115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.654 [2024-11-15 10:48:53.903124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.654 [2024-11-15 10:48:53.904825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.654 [2024-11-15 10:48:53.904891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.654 [2024-11-15 10:48:53.904957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:05.654 [2024-11-15 10:48:53.904960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.654 [2024-11-15 10:48:53.905449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
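The reactor messages above follow directly from the core masks on the command lines: the target was started with -m 0xF (binary 1111, cores 0-3), while the four bdevperf instances launched further down use 0x10, 0x20, 0x40 and 0x80 (cores 4, 5, 6 and 7), so the target and the load generators never share a core. A throwaway helper to expand a mask, purely illustrative and not part of the test scripts:

  mask_to_cores() {              # print the CPU ids selected by a hex core mask
    local mask=$((16#${1#0x})) core=0
    while (( mask )); do
      (( mask & 1 )) && printf '%d ' "$core"
      (( mask >>= 1 )); (( core++ ))
    done
    echo
  }
  mask_to_cores 0xF     # -> 0 1 2 3   (the nvmf_tgt reactors above)
  mask_to_cores 0x80    # -> 7         (the unmap bdevperf instance later on)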
00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:05.654 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.654 [2024-11-15 10:48:54.095230] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:05.654 [2024-11-15 10:48:54.095462] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:05.654 [2024-11-15 10:48:54.096379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:05.654 [2024-11-15 10:48:54.097202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.654 [2024-11-15 10:48:54.105634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.654 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.912 Malloc0 00:30:05.912 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.912 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.913 [2024-11-15 10:48:54.157831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=520289 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=520291 00:30:05.913 10:48:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=520293 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.913 { 00:30:05.913 "params": { 00:30:05.913 "name": "Nvme$subsystem", 00:30:05.913 "trtype": "$TEST_TRANSPORT", 00:30:05.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.913 "adrfam": "ipv4", 00:30:05.913 "trsvcid": "$NVMF_PORT", 00:30:05.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.913 "hdgst": ${hdgst:-false}, 00:30:05.913 "ddgst": ${ddgst:-false} 00:30:05.913 }, 00:30:05.913 "method": "bdev_nvme_attach_controller" 00:30:05.913 } 00:30:05.913 EOF 00:30:05.913 )") 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=520295 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.913 { 00:30:05.913 "params": { 00:30:05.913 "name": "Nvme$subsystem", 00:30:05.913 "trtype": "$TEST_TRANSPORT", 00:30:05.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.913 "adrfam": "ipv4", 00:30:05.913 "trsvcid": "$NVMF_PORT", 00:30:05.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.913 "hdgst": ${hdgst:-false}, 00:30:05.913 "ddgst": ${ddgst:-false} 00:30:05.913 }, 00:30:05.913 "method": "bdev_nvme_attach_controller" 00:30:05.913 } 00:30:05.913 EOF 
00:30:05.913 )") 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.913 { 00:30:05.913 "params": { 00:30:05.913 "name": "Nvme$subsystem", 00:30:05.913 "trtype": "$TEST_TRANSPORT", 00:30:05.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.913 "adrfam": "ipv4", 00:30:05.913 "trsvcid": "$NVMF_PORT", 00:30:05.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.913 "hdgst": ${hdgst:-false}, 00:30:05.913 "ddgst": ${ddgst:-false} 00:30:05.913 }, 00:30:05.913 "method": "bdev_nvme_attach_controller" 00:30:05.913 } 00:30:05.913 EOF 00:30:05.913 )") 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.913 { 00:30:05.913 "params": { 00:30:05.913 "name": "Nvme$subsystem", 00:30:05.913 "trtype": "$TEST_TRANSPORT", 00:30:05.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.913 "adrfam": "ipv4", 00:30:05.913 "trsvcid": "$NVMF_PORT", 00:30:05.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.913 "hdgst": ${hdgst:-false}, 00:30:05.913 "ddgst": ${ddgst:-false} 00:30:05.913 }, 00:30:05.913 "method": "bdev_nvme_attach_controller" 00:30:05.913 } 00:30:05.913 EOF 00:30:05.913 )") 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 520289 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:05.913 "params": { 00:30:05.913 "name": "Nvme1", 00:30:05.913 "trtype": "tcp", 00:30:05.913 "traddr": "10.0.0.2", 00:30:05.913 "adrfam": "ipv4", 00:30:05.913 "trsvcid": "4420", 00:30:05.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.913 "hdgst": false, 00:30:05.913 "ddgst": false 00:30:05.913 }, 00:30:05.913 "method": "bdev_nvme_attach_controller" 00:30:05.913 }' 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:05.913 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:05.913 "params": { 00:30:05.913 "name": "Nvme1", 00:30:05.913 "trtype": "tcp", 00:30:05.913 "traddr": "10.0.0.2", 00:30:05.913 "adrfam": "ipv4", 00:30:05.913 "trsvcid": "4420", 00:30:05.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.913 "hdgst": false, 00:30:05.913 "ddgst": false 00:30:05.913 }, 00:30:05.913 "method": "bdev_nvme_attach_controller" 00:30:05.913 }' 00:30:05.914 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:05.914 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:05.914 "params": { 00:30:05.914 "name": "Nvme1", 00:30:05.914 "trtype": "tcp", 00:30:05.914 "traddr": "10.0.0.2", 00:30:05.914 "adrfam": "ipv4", 00:30:05.914 "trsvcid": "4420", 00:30:05.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.914 "hdgst": false, 00:30:05.914 "ddgst": false 00:30:05.914 }, 00:30:05.914 "method": "bdev_nvme_attach_controller" 00:30:05.914 }' 00:30:05.914 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:05.914 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:05.914 "params": { 00:30:05.914 "name": "Nvme1", 00:30:05.914 "trtype": "tcp", 00:30:05.914 "traddr": "10.0.0.2", 00:30:05.914 "adrfam": "ipv4", 00:30:05.914 "trsvcid": "4420", 00:30:05.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.914 "hdgst": false, 00:30:05.914 "ddgst": false 00:30:05.914 }, 00:30:05.914 "method": "bdev_nvme_attach_controller" 00:30:05.914 }' 00:30:05.914 [2024-11-15 10:48:54.209996] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:30:05.914 [2024-11-15 10:48:54.210028] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:30:05.914 [2024-11-15 10:48:54.210029] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:30:05.914 [2024-11-15 10:48:54.210028] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
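Each printf above is gen_nvmf_target_json emitting one bdev_nvme_attach_controller entry that points back at the listener created earlier (Nvme1 over TCP to 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1), and the jq calls assemble it into the --json config each bdevperf instance reads from /dev/fd/63. Only the params block is verbatim from the trace; the surrounding framing below is inferred from the usual SPDK JSON-config layout and is shown as a sketch of what the merged file plausibly looks like:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }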
00:30:05.914 [2024-11-15 10:48:54.210076] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:05.914 [2024-11-15 10:48:54.210107] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:05.914 [2024-11-15 10:48:54.210108] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:05.914 [2024-11-15 10:48:54.210107] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:06.171 [2024-11-15 10:48:54.391463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.171 [2024-11-15 10:48:54.445959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:06.171 [2024-11-15 10:48:54.496314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.171 [2024-11-15 10:48:54.550847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:06.171 [2024-11-15 10:48:54.594760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.430 [2024-11-15 10:48:54.649848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:06.430 [2024-11-15 10:48:54.665580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.430 [2024-11-15 10:48:54.715972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:06.430 Running I/O for 1 seconds... 00:30:06.430 Running I/O for 1 seconds... 00:30:06.430 Running I/O for 1 seconds... 00:30:06.690 Running I/O for 1 seconds...
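The four "Running I/O for 1 seconds..." lines are the four bdevperf instances queued above, one per I/O type (write, read, flush, unmap), each pinned to its own core and all attached to the same Nvme1n1 namespace over TCP. Written out for the write instance, with flags copied from the trace and process substitution standing in for the /dev/fd/63 plumbing the harness uses:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256
  # -q 128: queue depth, -o 4096: 4 KiB I/Os, -w write: workload type,
  # -t 1: run for one second, -s 256: 256 MB of memory for the app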
00:30:07.623 10707.00 IOPS, 41.82 MiB/s 00:30:07.623 Latency(us) 00:30:07.623 [2024-11-15T09:48:56.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.623 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:07.623 Nvme1n1 : 1.01 10751.82 42.00 0.00 0.00 11855.86 4369.07 13689.74 00:30:07.623 [2024-11-15T09:48:56.086Z] =================================================================================================================== 00:30:07.623 [2024-11-15T09:48:56.086Z] Total : 10751.82 42.00 0.00 0.00 11855.86 4369.07 13689.74 00:30:07.623 201064.00 IOPS, 785.41 MiB/s 00:30:07.623 Latency(us) 00:30:07.623 [2024-11-15T09:48:56.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.623 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:07.623 Nvme1n1 : 1.00 200690.89 783.95 0.00 0.00 634.57 282.17 1856.85 00:30:07.623 [2024-11-15T09:48:56.086Z] =================================================================================================================== 00:30:07.623 [2024-11-15T09:48:56.086Z] Total : 200690.89 783.95 0.00 0.00 634.57 282.17 1856.85 00:30:07.623 9424.00 IOPS, 36.81 MiB/s 00:30:07.623 Latency(us) 00:30:07.623 [2024-11-15T09:48:56.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.623 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:07.623 Nvme1n1 : 1.01 9499.78 37.11 0.00 0.00 13426.38 2439.40 18155.90 00:30:07.623 [2024-11-15T09:48:56.086Z] =================================================================================================================== 00:30:07.623 [2024-11-15T09:48:56.086Z] Total : 9499.78 37.11 0.00 0.00 13426.38 2439.40 18155.90 00:30:07.623 8687.00 IOPS, 33.93 MiB/s 00:30:07.623 Latency(us) 00:30:07.623 [2024-11-15T09:48:56.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.623 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:07.623 Nvme1n1 : 1.01 8762.75 34.23 0.00 0.00 14551.84 2051.03 21359.88 00:30:07.623 [2024-11-15T09:48:56.086Z] =================================================================================================================== 00:30:07.623 [2024-11-15T09:48:56.086Z] Total : 8762.75 34.23 0.00 0.00 14551.84 2051.03 21359.88 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 520291 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 520293 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 520295 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:07.880 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:07.880 rmmod nvme_tcp 00:30:07.880 rmmod nvme_fabrics 00:30:07.881 rmmod nvme_keyring 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 520267 ']' 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 520267 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 520267 ']' 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 520267 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 520267 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 520267' 00:30:07.881 killing process with pid 520267 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 520267 00:30:07.881 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 520267 00:30:08.140 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:08.140 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:08.140 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:08.140 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:08.140 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:08.141 
10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:08.141 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:08.141 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:08.141 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:08.141 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.141 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.141 10:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.043 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:10.043 00:30:10.043 real 0m7.201s 00:30:10.043 user 0m14.172s 00:30:10.043 sys 0m4.053s 00:30:10.043 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:10.043 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:10.043 ************************************ 00:30:10.043 END TEST nvmf_bdev_io_wait 00:30:10.043 ************************************ 00:30:10.043 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:10.043 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:10.043 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:10.043 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:10.302 ************************************ 00:30:10.302 START TEST nvmf_queue_depth 00:30:10.302 ************************************ 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:10.302 * Looking for test storage... 
00:30:10.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:10.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.302 --rc genhtml_branch_coverage=1 00:30:10.302 --rc genhtml_function_coverage=1 00:30:10.302 --rc genhtml_legend=1 00:30:10.302 --rc geninfo_all_blocks=1 00:30:10.302 --rc geninfo_unexecuted_blocks=1 00:30:10.302 00:30:10.302 ' 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:10.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.302 --rc genhtml_branch_coverage=1 00:30:10.302 --rc genhtml_function_coverage=1 00:30:10.302 --rc genhtml_legend=1 00:30:10.302 --rc geninfo_all_blocks=1 00:30:10.302 --rc geninfo_unexecuted_blocks=1 00:30:10.302 00:30:10.302 ' 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:10.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.302 --rc genhtml_branch_coverage=1 00:30:10.302 --rc genhtml_function_coverage=1 00:30:10.302 --rc genhtml_legend=1 00:30:10.302 --rc geninfo_all_blocks=1 00:30:10.302 --rc geninfo_unexecuted_blocks=1 00:30:10.302 00:30:10.302 ' 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:10.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.302 --rc genhtml_branch_coverage=1 00:30:10.302 --rc genhtml_function_coverage=1 00:30:10.302 --rc genhtml_legend=1 00:30:10.302 --rc geninfo_all_blocks=1 00:30:10.302 --rc 
geninfo_unexecuted_blocks=1 00:30:10.302 00:30:10.302 ' 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.302 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:10.303 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:12.830 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:12.830 10:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:30:12.831 Found 0000:82:00.0 (0x8086 - 0x159b) 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:30:12.831 Found 0000:82:00.1 (0x8086 - 0x159b) 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 
00:30:12.831 Found net devices under 0000:82:00.0: cvl_0_0 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:30:12.831 Found net devices under 0000:82:00.1: cvl_0_1 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:12.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:30:12.831 00:30:12.831 --- 10.0.0.2 ping statistics --- 00:30:12.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.831 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:30:12.831 00:30:12.831 --- 10.0.0.1 ping statistics --- 00:30:12.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.831 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:12.831 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:12.831 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:12.831 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:12.831 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:12.831 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.831 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=522516 00:30:12.831 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 522516 00:30:12.831 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 522516 ']' 00:30:12.831 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.831 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:12.831 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:12.831 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
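While the namespaced nvmf target above is coming up, note the plumbing that nvmf_tcp_init just finished: the first e810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace as the target-side interface at 10.0.0.2, cvl_0_1 stayed in the default namespace as the initiator at 10.0.0.1, and an iptables rule opened TCP port 4420. A condensed sketch of that sequence, reusing only commands already visible in the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator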
00:30:12.831 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:12.831 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.831 [2024-11-15 10:49:01.063399] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:12.832 [2024-11-15 10:49:01.064589] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:30:12.832 [2024-11-15 10:49:01.064651] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.832 [2024-11-15 10:49:01.140940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.832 [2024-11-15 10:49:01.198504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.832 [2024-11-15 10:49:01.198570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.832 [2024-11-15 10:49:01.198599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.832 [2024-11-15 10:49:01.198611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.832 [2024-11-15 10:49:01.198620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.832 [2024-11-15 10:49:01.199296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.832 [2024-11-15 10:49:01.289619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:12.832 [2024-11-15 10:49:01.289931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
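The target was launched with -e 0xFFFF, so every tracepoint group is enabled and the app_setup_trace notices above spell out how a trace could be captured while the test runs. A sketch of doing that by hand (the build/bin location of spdk_trace is an assumption; the command line and the /dev/shm/nvmf_trace.0 file name come from the notices themselves):

  # snapshot the live trace of the nvmf target (shm id 0)
  ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # or keep the raw shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/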
00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:13.090 [2024-11-15 10:49:01.339950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:13.090 Malloc0 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:13.090 [2024-11-15 10:49:01.400182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=522540 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 522540 /var/tmp/bdevperf.sock 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 522540 ']' 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:13.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:13.090 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:13.090 [2024-11-15 10:49:01.452094] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
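At this point the target side is fully provisioned: a TCP transport (created with -o -u 8192), a 64 MiB Malloc0 bdev with 512-byte blocks exported as a namespace of nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.2:4420, while a paused bdevperf (-z) waits on /var/tmp/bdevperf.sock. A sketch of the same target-side configuration issued directly from the spdk checkout, assuming rpc_cmd simply forwards its arguments to scripts/rpc.py against the default /var/tmp/spdk.sock socket (the $rpc shorthand is mine):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevperf side then attaches to that subsystem over 10.0.0.2:4420 and kicks off the run via bdevperf.py, as the trace below shows.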
00:30:13.090 [2024-11-15 10:49:01.452177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522540 ] 00:30:13.090 [2024-11-15 10:49:01.518570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.347 [2024-11-15 10:49:01.577931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.347 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:13.347 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:30:13.347 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:13.347 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.347 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:13.347 NVMe0n1 00:30:13.347 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.347 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:13.605 Running I/O for 10 seconds... 00:30:15.469 9216.00 IOPS, 36.00 MiB/s [2024-11-15T09:49:05.301Z] 9265.50 IOPS, 36.19 MiB/s [2024-11-15T09:49:06.234Z] 9362.33 IOPS, 36.57 MiB/s [2024-11-15T09:49:07.166Z] 9473.00 IOPS, 37.00 MiB/s [2024-11-15T09:49:08.097Z] 9459.80 IOPS, 36.95 MiB/s [2024-11-15T09:49:09.027Z] 9555.17 IOPS, 37.32 MiB/s [2024-11-15T09:49:09.958Z] 9555.57 IOPS, 37.33 MiB/s [2024-11-15T09:49:11.329Z] 9598.25 IOPS, 37.49 MiB/s [2024-11-15T09:49:12.261Z] 9566.11 IOPS, 37.37 MiB/s [2024-11-15T09:49:12.261Z] 9620.70 IOPS, 37.58 MiB/s 00:30:23.798 Latency(us) 00:30:23.798 [2024-11-15T09:49:12.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.798 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:23.798 Verification LBA range: start 0x0 length 0x4000 00:30:23.798 NVMe0n1 : 10.09 9633.30 37.63 0.00 0.00 105906.71 20486.07 64856.37 00:30:23.798 [2024-11-15T09:49:12.261Z] =================================================================================================================== 00:30:23.798 [2024-11-15T09:49:12.261Z] Total : 9633.30 37.63 0.00 0.00 105906.71 20486.07 64856.37 00:30:23.798 { 00:30:23.798 "results": [ 00:30:23.798 { 00:30:23.798 "job": "NVMe0n1", 00:30:23.798 "core_mask": "0x1", 00:30:23.798 "workload": "verify", 00:30:23.798 "status": "finished", 00:30:23.798 "verify_range": { 00:30:23.798 "start": 0, 00:30:23.798 "length": 16384 00:30:23.798 }, 00:30:23.798 "queue_depth": 1024, 00:30:23.798 "io_size": 4096, 00:30:23.798 "runtime": 10.093215, 00:30:23.798 "iops": 9633.303164551631, 00:30:23.798 "mibps": 37.63009048652981, 00:30:23.798 "io_failed": 0, 00:30:23.798 "io_timeout": 0, 00:30:23.798 "avg_latency_us": 105906.71392085364, 00:30:23.798 "min_latency_us": 20486.068148148148, 00:30:23.798 "max_latency_us": 64856.36740740741 00:30:23.798 } 00:30:23.798 ], 
00:30:23.798 "core_count": 1 00:30:23.798 } 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 522540 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 522540 ']' 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 522540 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 522540 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 522540' 00:30:23.798 killing process with pid 522540 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 522540 00:30:23.798 Received shutdown signal, test time was about 10.000000 seconds 00:30:23.798 00:30:23.798 Latency(us) 00:30:23.798 [2024-11-15T09:49:12.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.798 [2024-11-15T09:49:12.261Z] =================================================================================================================== 00:30:23.798 [2024-11-15T09:49:12.261Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 522540 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.798 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:24.055 rmmod nvme_tcp 00:30:24.055 rmmod nvme_fabrics 00:30:24.055 rmmod nvme_keyring 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:24.055 10:49:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 522516 ']' 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 522516 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 522516 ']' 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 522516 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 522516 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 522516' 00:30:24.055 killing process with pid 522516 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 522516 00:30:24.055 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 522516 00:30:24.313 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:24.313 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:24.313 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:24.313 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:24.313 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:24.313 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:24.313 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:30:24.313 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:24.313 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:24.313 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.313 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.313 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.213 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:26.213 00:30:26.213 real 0m16.080s 00:30:26.213 user 0m21.780s 00:30:26.213 sys 0m3.779s 00:30:26.213 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:30:26.213 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.213 ************************************ 00:30:26.213 END TEST nvmf_queue_depth 00:30:26.213 ************************************ 00:30:26.213 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:26.213 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:26.213 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:26.213 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:26.213 ************************************ 00:30:26.213 START TEST nvmf_target_multipath 00:30:26.213 ************************************ 00:30:26.213 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:26.470 * Looking for test storage... 00:30:26.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:26.470 10:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.470 --rc genhtml_branch_coverage=1 00:30:26.470 --rc genhtml_function_coverage=1 00:30:26.470 --rc genhtml_legend=1 00:30:26.470 --rc geninfo_all_blocks=1 00:30:26.470 --rc geninfo_unexecuted_blocks=1 00:30:26.470 00:30:26.470 ' 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.470 --rc genhtml_branch_coverage=1 00:30:26.470 --rc genhtml_function_coverage=1 00:30:26.470 --rc genhtml_legend=1 00:30:26.470 --rc geninfo_all_blocks=1 00:30:26.470 --rc geninfo_unexecuted_blocks=1 00:30:26.470 00:30:26.470 ' 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.470 --rc genhtml_branch_coverage=1 00:30:26.470 --rc genhtml_function_coverage=1 00:30:26.470 --rc genhtml_legend=1 00:30:26.470 --rc geninfo_all_blocks=1 00:30:26.470 --rc 
geninfo_unexecuted_blocks=1 00:30:26.470 00:30:26.470 ' 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.470 --rc genhtml_branch_coverage=1 00:30:26.470 --rc genhtml_function_coverage=1 00:30:26.470 --rc genhtml_legend=1 00:30:26.470 --rc geninfo_all_blocks=1 00:30:26.470 --rc geninfo_unexecuted_blocks=1 00:30:26.470 00:30:26.470 ' 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:30:26.470 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:26.471 10:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.471 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.999 10:49:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:30:28.999 Found 0000:82:00.0 (0x8086 - 0x159b) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:30:28.999 Found 0000:82:00.1 (0x8086 - 0x159b) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.999 10:49:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:30:28.999 Found net devices under 0000:82:00.0: cvl_0_0 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.999 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:30:29.000 Found net devices under 0000:82:00.1: cvl_0_1 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:29.000 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:29.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:29.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:30:29.000 00:30:29.000 --- 10.0.0.2 ping statistics --- 00:30:29.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.000 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:29.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:29.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:30:29.000 00:30:29.000 --- 10.0.0.1 ping statistics --- 00:30:29.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.000 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:29.000 only one NIC for nvmf test 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:29.000 rmmod nvme_tcp 00:30:29.000 rmmod nvme_fabrics 00:30:29.000 rmmod nvme_keyring 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:29.000 10:49:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.000 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:30.901 10:49:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:30.901 00:30:30.901 real 0m4.585s 00:30:30.901 user 0m0.969s 00:30:30.901 sys 0m1.613s 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:30.901 ************************************ 00:30:30.901 END TEST nvmf_target_multipath 00:30:30.901 ************************************ 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:30.901 ************************************ 00:30:30.901 START TEST nvmf_zcopy 00:30:30.901 ************************************ 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:30.901 * Looking for test storage... 
00:30:30.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:30:30.901 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.213 --rc genhtml_branch_coverage=1 00:30:31.213 --rc genhtml_function_coverage=1 00:30:31.213 --rc genhtml_legend=1 00:30:31.213 --rc geninfo_all_blocks=1 00:30:31.213 --rc geninfo_unexecuted_blocks=1 00:30:31.213 00:30:31.213 ' 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.213 --rc genhtml_branch_coverage=1 00:30:31.213 --rc genhtml_function_coverage=1 00:30:31.213 --rc genhtml_legend=1 00:30:31.213 --rc geninfo_all_blocks=1 00:30:31.213 --rc geninfo_unexecuted_blocks=1 00:30:31.213 00:30:31.213 ' 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.213 --rc genhtml_branch_coverage=1 00:30:31.213 --rc genhtml_function_coverage=1 00:30:31.213 --rc genhtml_legend=1 00:30:31.213 --rc geninfo_all_blocks=1 00:30:31.213 --rc geninfo_unexecuted_blocks=1 00:30:31.213 00:30:31.213 ' 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.213 --rc genhtml_branch_coverage=1 00:30:31.213 --rc genhtml_function_coverage=1 00:30:31.213 --rc genhtml_legend=1 00:30:31.213 --rc geninfo_all_blocks=1 00:30:31.213 --rc geninfo_unexecuted_blocks=1 00:30:31.213 00:30:31.213 ' 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.213 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.213 10:49:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:31.214 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:33.179 10:49:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:30:33.179 Found 0000:82:00.0 (0x8086 - 0x159b) 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:33.179 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:30:33.180 Found 0000:82:00.1 (0x8086 - 0x159b) 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:30:33.180 Found net devices under 0000:82:00.0: cvl_0_0 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:30:33.180 Found net devices under 0000:82:00.1: cvl_0_1 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:33.180 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:33.438 10:49:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:33.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:30:33.438 00:30:33.438 --- 10.0.0.2 ping statistics --- 00:30:33.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.438 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:33.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:30:33.438 00:30:33.438 --- 10.0.0.1 ping statistics --- 00:30:33.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.438 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=527720 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 527720 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 527720 ']' 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:33.438 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.438 [2024-11-15 10:49:21.811468] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:33.438 [2024-11-15 10:49:21.812606] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:30:33.438 [2024-11-15 10:49:21.812665] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.438 [2024-11-15 10:49:21.884485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.696 [2024-11-15 10:49:21.945056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.696 [2024-11-15 10:49:21.945102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.696 [2024-11-15 10:49:21.945115] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.696 [2024-11-15 10:49:21.945127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.696 [2024-11-15 10:49:21.945137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:33.696 [2024-11-15 10:49:21.945737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.696 [2024-11-15 10:49:22.035392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:33.696 [2024-11-15 10:49:22.035697] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
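The nvmf_tcp_init and nvmfappstart traces above amount to a small, reproducible setup: one of the two detected e810 ports (cvl_0_0) is moved into its own network namespace, both sides get a /24 address, TCP port 4420 is opened on the initiator-side interface, connectivity is checked with one ping in each direction, and nvmf_tgt is then launched inside the namespace in interrupt mode with core mask 0x2. A condensed sketch of that sequence, using only commands that appear in the trace (interface names and addresses are the ones detected on this machine; the waitforlisten step that polls /var/tmp/spdk.sock is a test-suite helper and is omitted here):

    # target-side port lives in its own namespace, initiator side stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns
    # start the target inside the namespace: single core, interrupt mode
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &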
00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.696 [2024-11-15 10:49:22.086284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.696 [2024-11-15 10:49:22.102510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:33.696 10:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.696 malloc0 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:33.696 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:33.697 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:33.697 { 00:30:33.697 "params": { 00:30:33.697 "name": "Nvme$subsystem", 00:30:33.697 "trtype": "$TEST_TRANSPORT", 00:30:33.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:33.697 "adrfam": "ipv4", 00:30:33.697 "trsvcid": "$NVMF_PORT", 00:30:33.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:33.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:33.697 "hdgst": ${hdgst:-false}, 00:30:33.697 "ddgst": ${ddgst:-false} 00:30:33.697 }, 00:30:33.697 "method": "bdev_nvme_attach_controller" 00:30:33.697 } 00:30:33.697 EOF 00:30:33.697 )") 00:30:33.697 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:33.697 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:33.697 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:33.697 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:33.697 "params": { 00:30:33.697 "name": "Nvme1", 00:30:33.697 "trtype": "tcp", 00:30:33.697 "traddr": "10.0.0.2", 00:30:33.697 "adrfam": "ipv4", 00:30:33.697 "trsvcid": "4420", 00:30:33.697 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:33.697 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:33.697 "hdgst": false, 00:30:33.697 "ddgst": false 00:30:33.697 }, 00:30:33.697 "method": "bdev_nvme_attach_controller" 00:30:33.697 }' 00:30:33.954 [2024-11-15 10:49:22.191356] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
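The rpc_cmd calls traced above (target/zcopy.sh@22 through @30) are what build the target configuration for this test: a TCP transport with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 plus a discovery listener, and a 32 MiB malloc bdev with 4096-byte blocks attached as namespace 1. In this suite rpc_cmd is a thin wrapper that forwards its arguments to scripts/rpc.py against the target's RPC socket, so the equivalent manual sequence (flags copied verbatim from the trace) looks roughly like the sketch below, assuming it is run from the SPDK tree with the target from the previous sketch already listening:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0          # 32 MiB bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1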
00:30:33.954 [2024-11-15 10:49:22.191451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527751 ] 00:30:33.954 [2024-11-15 10:49:22.263156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.954 [2024-11-15 10:49:22.321188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.212 Running I/O for 10 seconds... 00:30:36.078 6249.00 IOPS, 48.82 MiB/s [2024-11-15T09:49:25.912Z] 6248.50 IOPS, 48.82 MiB/s [2024-11-15T09:49:26.843Z] 6290.33 IOPS, 49.14 MiB/s [2024-11-15T09:49:27.775Z] 6270.00 IOPS, 48.98 MiB/s [2024-11-15T09:49:28.706Z] 6298.60 IOPS, 49.21 MiB/s [2024-11-15T09:49:29.639Z] 6285.00 IOPS, 49.10 MiB/s [2024-11-15T09:49:30.570Z] 6284.86 IOPS, 49.10 MiB/s [2024-11-15T09:49:31.942Z] 6278.25 IOPS, 49.05 MiB/s [2024-11-15T09:49:32.874Z] 6263.11 IOPS, 48.93 MiB/s [2024-11-15T09:49:32.874Z] 6246.50 IOPS, 48.80 MiB/s 00:30:44.411 Latency(us) 00:30:44.411 [2024-11-15T09:49:32.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.411 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:44.411 Verification LBA range: start 0x0 length 0x1000 00:30:44.411 Nvme1n1 : 10.01 6249.20 48.82 0.00 0.00 20430.79 506.69 26408.58 00:30:44.411 [2024-11-15T09:49:32.874Z] =================================================================================================================== 00:30:44.411 [2024-11-15T09:49:32.874Z] Total : 6249.20 48.82 0.00 0.00 20430.79 506.69 26408.58 00:30:44.411 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=529043 00:30:44.411 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:44.411 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:44.411 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:44.411 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:44.411 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:44.411 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:44.411 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:44.411 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:44.411 { 00:30:44.411 "params": { 00:30:44.411 "name": "Nvme$subsystem", 00:30:44.411 "trtype": "$TEST_TRANSPORT", 00:30:44.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.411 "adrfam": "ipv4", 00:30:44.411 "trsvcid": "$NVMF_PORT", 00:30:44.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.412 "hdgst": ${hdgst:-false}, 00:30:44.412 "ddgst": ${ddgst:-false} 00:30:44.412 }, 00:30:44.412 "method": "bdev_nvme_attach_controller" 00:30:44.412 } 00:30:44.412 EOF 00:30:44.412 )") 00:30:44.412 [2024-11-15 10:49:32.798204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:30:44.412 [2024-11-15 10:49:32.798243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.412 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:44.412 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:44.412 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:44.412 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:44.412 "params": { 00:30:44.412 "name": "Nvme1", 00:30:44.412 "trtype": "tcp", 00:30:44.412 "traddr": "10.0.0.2", 00:30:44.412 "adrfam": "ipv4", 00:30:44.412 "trsvcid": "4420", 00:30:44.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:44.412 "hdgst": false, 00:30:44.412 "ddgst": false 00:30:44.412 }, 00:30:44.412 "method": "bdev_nvme_attach_controller" 00:30:44.412 }' 00:30:44.412 [2024-11-15 10:49:32.806138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.412 [2024-11-15 10:49:32.806161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.412 [2024-11-15 10:49:32.814138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.412 [2024-11-15 10:49:32.814159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.412 [2024-11-15 10:49:32.822137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.412 [2024-11-15 10:49:32.822158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.412 [2024-11-15 10:49:32.830138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.412 [2024-11-15 10:49:32.830159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.412 [2024-11-15 10:49:32.838138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.412 [2024-11-15 10:49:32.838158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.412 [2024-11-15 10:49:32.838404] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
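Both bdevperf invocations in this trace take their configuration as --json /dev/fd/62 (the 10-second verify run) and --json /dev/fd/63 (this 5-second randrw run). Those paths are bash process substitutions: gen_nvmf_target_json prints an SPDK JSON config whose bdev_nvme_attach_controller entry is the fragment echoed above (Nvme1 over NVMe/TCP to 10.0.0.2:4420), and bdevperf reads it as if it were a file, so no temporary config file is needed. A minimal sketch of the pattern, assuming gen_nvmf_target_json from nvmf/common.sh is sourced and the relative paths match this workspace:

    # the generated JSON appears to bdevperf as /dev/fd/NN via process substitution
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192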
00:30:44.412 [2024-11-15 10:49:32.838464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529043 ] 00:30:44.412 [2024-11-15 10:49:32.846138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.412 [2024-11-15 10:49:32.846158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.412 [2024-11-15 10:49:32.854137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.412 [2024-11-15 10:49:32.854157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.412 [2024-11-15 10:49:32.862138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.412 [2024-11-15 10:49:32.862159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.412 [2024-11-15 10:49:32.870140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.412 [2024-11-15 10:49:32.870161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.412 [2024-11-15 10:49:32.878137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.412 [2024-11-15 10:49:32.878158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.670 [2024-11-15 10:49:32.886138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.670 [2024-11-15 10:49:32.886158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.670 [2024-11-15 10:49:32.894140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.670 [2024-11-15 10:49:32.894160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.670 [2024-11-15 10:49:32.902137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.670 [2024-11-15 10:49:32.902157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.670 [2024-11-15 10:49:32.907163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.670 [2024-11-15 10:49:32.910137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.670 [2024-11-15 10:49:32.910157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.670 [2024-11-15 10:49:32.918178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.670 [2024-11-15 10:49:32.918217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.670 [2024-11-15 10:49:32.926158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.670 [2024-11-15 10:49:32.926188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.670 [2024-11-15 10:49:32.934138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.670 [2024-11-15 10:49:32.934159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.670 [2024-11-15 10:49:32.942137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.670 [2024-11-15 10:49:32.942157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:30:44.670 [2024-11-15 10:49:32.950142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:32.950163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:32.958137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:32.958158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:32.966137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:32.966158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:32.967712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.671 [2024-11-15 10:49:32.974138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:32.974158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:32.982151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:32.982177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:32.990171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:32.990206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:32.998170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:32.998204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.006171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.006208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.014170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.014206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.022171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.022207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.030170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.030208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.038139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.038159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.046167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.046201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.054170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.054207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 
10:49:33.062165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.062201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.070138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.070159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.078138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.078158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.086143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.086168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.094142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.094167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.102141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.102165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.110143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.110167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.118141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.118164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.126143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.126166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.671 [2024-11-15 10:49:33.134144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.671 [2024-11-15 10:49:33.134166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.142143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.142168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.150140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.150163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 Running I/O for 5 seconds... 
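From "Running I/O for 5 seconds..." onward the trace interleaves two things: bdevperf driving the 50/50 random read/write workload against Nvme1n1, and a steady stream of paired target-side errors, "Requested NSID 1 already in use" from spdk_nvmf_subsystem_add_ns_ext followed by "Unable to add namespace" from the RPC layer. Each pair is the target rejecting one more nvmf_subsystem_add_ns call for NSID 1, which is still occupied by malloc0 from the setup above. The client-side loop issuing those calls is not visible in this excerpt; purely as an illustration (not the suite's own code), a loop of the following shape would produce the same pattern for as long as bdevperf runs:

    # illustrative only: re-adding an NSID that is already in use is expected to fail;
    # '|| true' keeps issuing requests while the bdevperf process is alive
    while kill -0 "$bdevperf_pid" 2> /dev/null; do          # $bdevperf_pid: hypothetical
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done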
00:30:44.929 [2024-11-15 10:49:33.166585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.166612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.176929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.176954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.190491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.190525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.199928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.199954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.211281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.211307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.221558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.221585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.233135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.233161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.242335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.242385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.253309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.253333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.267851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.267876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.276577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.276603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.287579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.287605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.297242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.297267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.309741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.309766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.322285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 
[2024-11-15 10:49:33.322310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.331604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.331630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.342313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.342337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.352173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.352198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.366061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.366087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.375125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.375149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.929 [2024-11-15 10:49:33.386172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.929 [2024-11-15 10:49:33.386197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.397244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.397277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.410486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.410513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.419693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.419732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.430337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.430385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.439871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.439896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.450433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.450459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.460810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.460835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.474230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.474255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.483577] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.483604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.494457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.494483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.504556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.504582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.518331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.518381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.527604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.527635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.538515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.538542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.548890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.548915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.563698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.563723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.572768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.572792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.583722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.583747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.594111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.594143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.604823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.604856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.618187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.618213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.627738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.627786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.639553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.639582] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.188 [2024-11-15 10:49:33.650269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.188 [2024-11-15 10:49:33.650294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.661446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.661475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.675792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.675828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.685654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.685682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.697471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.697499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.709439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.709466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.723225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.723250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.731926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.731951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.743497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.743524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.753890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.753916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.765872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.765896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.775128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.775153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.786603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.786629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.797172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.797196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.810008] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.810032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.819298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.819333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.830890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.830915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.841473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.841499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.855086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.855111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.864521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.864546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.875539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.875564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.885074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.885098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.898244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.898269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.448 [2024-11-15 10:49:33.907224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.448 [2024-11-15 10:49:33.907249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.707 [2024-11-15 10:49:33.919147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.707 [2024-11-15 10:49:33.919183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.707 [2024-11-15 10:49:33.929924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.707 [2024-11-15 10:49:33.929952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.707 [2024-11-15 10:49:33.944424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.707 [2024-11-15 10:49:33.944451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.707 [2024-11-15 10:49:33.953581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.707 [2024-11-15 10:49:33.953607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.707 [2024-11-15 10:49:33.964966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.707 [2024-11-15 10:49:33.964998] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.707 [2024-11-15 10:49:33.977604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.707 [2024-11-15 10:49:33.977630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:33.991332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:33.991380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.000574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.000600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.011786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.011811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.022326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.022378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.032888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.032913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.046963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.046987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.056064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.056089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.067762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.067788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.078237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.078262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.088513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.088539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.102288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.102314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.111179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.111228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.122780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.122805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.133418] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.133446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.145691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.145717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 12154.00 IOPS, 94.95 MiB/s [2024-11-15T09:49:34.171Z] [2024-11-15 10:49:34.157249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.157274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.708 [2024-11-15 10:49:34.171274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.708 [2024-11-15 10:49:34.171301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.965 [2024-11-15 10:49:34.181505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.181532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.192962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.192986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.208272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.208303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.217653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.217678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.228881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.228906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.242741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.242765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.251962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.251986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.263184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.263208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.273825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.273849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.286601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.286627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.295437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:45.966 [2024-11-15 10:49:34.295463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.308445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.308472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.317924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.317949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.329216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.329240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.343114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.343139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.352423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.352450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.364202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.364227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.379313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.379338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.389145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.389170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.404288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.404313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.413092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.413116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.966 [2024-11-15 10:49:34.426548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.966 [2024-11-15 10:49:34.426575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.436396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.436439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.447803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.447829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.458084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.458116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.468875] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.468900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.482058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.482087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.491179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.491205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.503051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.503076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.513572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.513599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.524783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.524808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.538758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.538783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.548396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.548423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.559826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.559851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.570193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.570218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.580663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.580689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.594590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.594616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.603552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.603578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.614522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.614549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.625462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.625488] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.638133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.638159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.647463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.647489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.659078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.659102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.669607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.669642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.224 [2024-11-15 10:49:34.684298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.224 [2024-11-15 10:49:34.684323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.693826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.693853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.705493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.705519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.719113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.719137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.728888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.728913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.743919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.743944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.753176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.753202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.766882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.766909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.776901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.776926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.791097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.791122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.801084] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.801109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.812065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.812090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.822075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.822101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.833874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.833899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.845798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.482 [2024-11-15 10:49:34.845824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.482 [2024-11-15 10:49:34.859631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.483 [2024-11-15 10:49:34.859682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.483 [2024-11-15 10:49:34.869290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.483 [2024-11-15 10:49:34.869315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.483 [2024-11-15 10:49:34.880405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.483 [2024-11-15 10:49:34.880432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.483 [2024-11-15 10:49:34.896056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.483 [2024-11-15 10:49:34.896089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.483 [2024-11-15 10:49:34.905436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.483 [2024-11-15 10:49:34.905463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.483 [2024-11-15 10:49:34.916340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.483 [2024-11-15 10:49:34.916389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.483 [2024-11-15 10:49:34.930795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.483 [2024-11-15 10:49:34.930823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.483 [2024-11-15 10:49:34.939859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.483 [2024-11-15 10:49:34.939885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:34.951961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:34.951987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:34.962252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:34.962277] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:34.973613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:34.973654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:34.983996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:34.984022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:34.995278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:34.995304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:35.005978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:35.006003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:35.016284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:35.016309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:35.027557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:35.027584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:35.038216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:35.038241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:35.048337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:35.048385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:35.061050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:35.061075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:35.076089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:35.076114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:35.084862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:35.084887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:35.096387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:35.096414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.740 [2024-11-15 10:49:35.108792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.740 [2024-11-15 10:49:35.108825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.741 [2024-11-15 10:49:35.118174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.741 [2024-11-15 10:49:35.118200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.741 [2024-11-15 10:49:35.129810] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.741 [2024-11-15 10:49:35.129835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.741 [2024-11-15 10:49:35.141928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.741 [2024-11-15 10:49:35.141953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.741 [2024-11-15 10:49:35.151249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.741 [2024-11-15 10:49:35.151275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.741 12095.50 IOPS, 94.50 MiB/s [2024-11-15T09:49:35.204Z] [2024-11-15 10:49:35.162985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.741 [2024-11-15 10:49:35.163009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.741 [2024-11-15 10:49:35.173492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.741 [2024-11-15 10:49:35.173518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.741 [2024-11-15 10:49:35.186131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.741 [2024-11-15 10:49:35.186156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.741 [2024-11-15 10:49:35.195315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.741 [2024-11-15 10:49:35.195340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.741 [2024-11-15 10:49:35.207020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.741 [2024-11-15 10:49:35.207046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.217830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.217854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.228823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.228848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.241255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.241280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.251178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.251203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.262551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.262578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.272936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.272961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.287037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:46.999 [2024-11-15 10:49:35.287061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.296820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.296845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.312276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.312301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.321747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.321772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.332846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.332872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.347919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.347943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.357013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.357037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.368022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.368047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.378374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.378402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.388802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.388826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.403397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.403423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.412455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.412481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.423505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.423532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.433875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.433900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.444531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.444557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.999 [2024-11-15 10:49:35.459407] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.999 [2024-11-15 10:49:35.459434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.469786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.469812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.481082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.481107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.494706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.494746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.503859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.503883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.515241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.515266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.525822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.525847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.539502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.539528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.548476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.548502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.559319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.559359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.569522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.569549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.582840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.582864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.592181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.592206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.603609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.603635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.257 [2024-11-15 10:49:35.613774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.257 [2024-11-15 10:49:35.613798] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.258 [2024-11-15 10:49:35.625433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.258 [2024-11-15 10:49:35.625459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.258 [2024-11-15 10:49:35.639098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.258 [2024-11-15 10:49:35.639122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.258 [2024-11-15 10:49:35.648475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.258 [2024-11-15 10:49:35.648501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.258 [2024-11-15 10:49:35.659601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.258 [2024-11-15 10:49:35.659627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.258 [2024-11-15 10:49:35.669794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.258 [2024-11-15 10:49:35.669819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.258 [2024-11-15 10:49:35.683381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.258 [2024-11-15 10:49:35.683422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.258 [2024-11-15 10:49:35.692266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.258 [2024-11-15 10:49:35.692290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.258 [2024-11-15 10:49:35.703555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.258 [2024-11-15 10:49:35.703580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.258 [2024-11-15 10:49:35.714088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.258 [2024-11-15 10:49:35.714112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.258 [2024-11-15 10:49:35.724706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.258 [2024-11-15 10:49:35.724733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.515 [2024-11-15 10:49:35.739944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.515 [2024-11-15 10:49:35.739969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.515 [2024-11-15 10:49:35.749236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.515 [2024-11-15 10:49:35.749261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.515 [2024-11-15 10:49:35.763990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.515 [2024-11-15 10:49:35.764015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.515 [2024-11-15 10:49:35.772814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.515 [2024-11-15 10:49:35.772839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.515 [2024-11-15 10:49:35.783846] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.783871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.794031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.794056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.804440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.804465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.819318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.819357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.828609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.828638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.840220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.840245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.855999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.856024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.865337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.865391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.878581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.878607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.887969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.887995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.899910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.899936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.915176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.915202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.924813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.924839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.939455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.939483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.948578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.948606] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.959618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.959669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.969805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.969830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.516 [2024-11-15 10:49:35.981270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.516 [2024-11-15 10:49:35.981297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:35.994031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:35.994056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.003400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.003427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.014866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.014891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.025045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.025069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.040268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.040292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.050069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.050094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.061508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.061535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.076113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.076149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.085407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.085443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.100520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.100556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.117226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.117251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.127017] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.127047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.138541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.138568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.148801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.148826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 12108.00 IOPS, 94.59 MiB/s [2024-11-15T09:49:36.237Z] [2024-11-15 10:49:36.162935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.162968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.172135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.172159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.183102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.183148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.193271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.193295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.205969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.205993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.215384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.215410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.226630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.226680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.774 [2024-11-15 10:49:36.237042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.774 [2024-11-15 10:49:36.237066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.249887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.249911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.261283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.261308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.275218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.275243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.284287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
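The pair of errors repeating above is the nvmf target's expected response when an add-namespace RPC asks for an NSID that is already attached to the subsystem: subsystem.c rejects the duplicate NSID and nvmf_rpc.c then reports that the namespace could not be added, while the interleaved IOPS/MiB/s lines show that I/O continues in the background. As a rough illustration only (not the exact commands this test runs), the same condition can be provoked against a running SPDK target with scripts/rpc.py; the subsystem NQN and bdev name below are assumed placeholders:
  # Assumed names for illustration; the second call is expected to fail with
  # "Requested NSID 1 already in use", matching the errors in this log.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1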
00:30:48.032 [2024-11-15 10:49:36.284311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.295749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.295784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.306077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.306101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.316495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.316526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.330483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.330509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.339564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.339590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.350582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.350608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.360536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.360569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.374136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.374161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.383116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.383141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.394102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.394142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.404495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.404522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.419957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.419981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.428982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.429006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.440529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.440555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.453971] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.453996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.463081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.463106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.475674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.475700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.484679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.484705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.032 [2024-11-15 10:49:36.496722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.032 [2024-11-15 10:49:36.496764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.289 [2024-11-15 10:49:36.509804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.289 [2024-11-15 10:49:36.509829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.289 [2024-11-15 10:49:36.521379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.289 [2024-11-15 10:49:36.521406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.289 [2024-11-15 10:49:36.535040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.289 [2024-11-15 10:49:36.535065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.289 [2024-11-15 10:49:36.544553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.289 [2024-11-15 10:49:36.544580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.289 [2024-11-15 10:49:36.555848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.289 [2024-11-15 10:49:36.555873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.566175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.566201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.576778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.576803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.590175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.590200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.599696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.599737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.610942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.610967] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.620956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.620982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.633799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.633823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.643193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.643218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.654527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.654554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.665109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.665134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.678111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.678136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.687391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.687418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.698567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.698594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.708786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.708811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.721907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.721932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.731503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.731529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.742910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.742935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.290 [2024-11-15 10:49:36.753414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.290 [2024-11-15 10:49:36.753442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.764976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.765001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.779082] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.779107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.788476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.788503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.799891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.799916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.815120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.815145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.824331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.824384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.836009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.836035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.846460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.846487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.856750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.856776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.869750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.869775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.879306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.879331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.890697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.890737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.901086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.901111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.915576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.915602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.925240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.925265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.938923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.938948] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.947892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.947917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.959119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.959144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.968765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.968790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.982524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.982551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:36.991408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:36.991434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:37.002393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:37.002419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.548 [2024-11-15 10:49:37.011850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.548 [2024-11-15 10:49:37.011877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.026371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.026409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.035174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.035200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.046873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.046899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.057550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.057576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.070927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.070952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.079982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.080007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.091389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.091431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.101627] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.101667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.113982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.114007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.126988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.127012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.136478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.136505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.147494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.147521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.158279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.158305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 12120.25 IOPS, 94.69 MiB/s [2024-11-15T09:49:37.269Z] [2024-11-15 10:49:37.169028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.169053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.182450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.182476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.191876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.191900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.203148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.203181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.213583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.213609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.226117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.226141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.235542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.235576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.247405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.247431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.257815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
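The message pairs repeating through this stretch all come from the same RPC path: each nvmf_subsystem_add_ns attempt is rejected by the target because NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1, so subsystem.c logs "Requested NSID 1 already in use" and the RPC layer follows up with "Unable to add namespace". As a rough illustration only, outside of zcopy.sh, the same pair of errors can be provoked by hand against a running SPDK target; the bdev and subsystem names are the ones used later in this log, but treat the exact invocation as a sketch rather than part of the test:

  # attach a bdev as NSID 1, then ask for the same NSID again; the second call is rejected
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # fails: Requested NSID 1 already in use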
00:30:48.806 [2024-11-15 10:49:37.257840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.806 [2024-11-15 10:49:37.268776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.806 [2024-11-15 10:49:37.268800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.283201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.283226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.292654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.292680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.303968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.303992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.314012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.314036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.325114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.325138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.337882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.337907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.347470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.347497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.358915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.358940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.369504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.369531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.383242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.383268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.392594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.392620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.403972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.403997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.414049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.414076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.424695] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.424720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.437569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.437596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.447059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.447091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.458500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.458526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.469215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.469241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.483728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.483754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.493686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.493727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.505849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.505875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.516904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.516930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.064 [2024-11-15 10:49:37.530593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.064 [2024-11-15 10:49:37.530621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.540301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.540326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.552226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.552251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.566628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.566669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.576271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.576297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.588489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.588516] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.604041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.604066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.613438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.613464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.625318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.625358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.636405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.636432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.649840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.649866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.659229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.659255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.670974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.671005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.680765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.680792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.695502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.695531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.705464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.705492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.716830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.716856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.731099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.731125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.740408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.740435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.751838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.751864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.767985] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.768012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.322 [2024-11-15 10:49:37.777395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.322 [2024-11-15 10:49:37.777423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.790108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.790135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.801496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.801524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.814112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.814138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.824446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.824475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.836297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.836323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.849089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.849115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.863420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.863447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.873012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.873038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.884401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.884428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.900898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.900932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.915214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.915240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.924873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.924898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.938781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.938806] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.948978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.949004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.960875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.960901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.973730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.973756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.983910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.983936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:37.996088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:37.996129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:38.012106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.579 [2024-11-15 10:49:38.012132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.579 [2024-11-15 10:49:38.022207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.580 [2024-11-15 10:49:38.022233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.580 [2024-11-15 10:49:38.034587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.580 [2024-11-15 10:49:38.034613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.580 [2024-11-15 10:49:38.045735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.580 [2024-11-15 10:49:38.045764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.057268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.057295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.068546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.068574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.082811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.082837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.093016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.093042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.106830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.106856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.117085] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.117111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.129692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.129733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.144184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.144210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.154032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.154058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.165753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.165781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 12019.60 IOPS, 93.90 MiB/s [2024-11-15T09:49:38.300Z] [2024-11-15 10:49:38.175068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.175095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 00:30:49.837 Latency(us) 00:30:49.837 [2024-11-15T09:49:38.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:49.837 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:30:49.837 Nvme1n1 : 5.01 12021.25 93.92 0.00 0.00 10635.38 2997.67 18252.99 00:30:49.837 [2024-11-15T09:49:38.300Z] =================================================================================================================== 00:30:49.837 [2024-11-15T09:49:38.300Z] Total : 12021.25 93.92 0.00 0.00 10635.38 2997.67 18252.99 00:30:49.837 [2024-11-15 10:49:38.182146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.182170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.190162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.190187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.198152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.198178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.206215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.206260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.218223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.218281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.226212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.226260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 
10:49:38.234205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.234248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.242214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.242262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.250207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.250252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.837 [2024-11-15 10:49:38.258217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.837 [2024-11-15 10:49:38.258262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.838 [2024-11-15 10:49:38.266209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.838 [2024-11-15 10:49:38.266256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.838 [2024-11-15 10:49:38.274216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.838 [2024-11-15 10:49:38.274266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.838 [2024-11-15 10:49:38.282212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.838 [2024-11-15 10:49:38.282256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.838 [2024-11-15 10:49:38.290215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.838 [2024-11-15 10:49:38.290267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.838 [2024-11-15 10:49:38.298201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.838 [2024-11-15 10:49:38.298261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.095 [2024-11-15 10:49:38.306213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.095 [2024-11-15 10:49:38.306263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.095 [2024-11-15 10:49:38.314158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.095 [2024-11-15 10:49:38.314189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.095 [2024-11-15 10:49:38.322141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.095 [2024-11-15 10:49:38.322162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.095 [2024-11-15 10:49:38.330140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.095 [2024-11-15 10:49:38.330161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.095 [2024-11-15 10:49:38.338143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.095 [2024-11-15 10:49:38.338164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.095 [2024-11-15 10:49:38.346158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.095 [2024-11-15 10:49:38.346185] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.095 [2024-11-15 10:49:38.354213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.095 [2024-11-15 10:49:38.354264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.095 [2024-11-15 10:49:38.362202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.095 [2024-11-15 10:49:38.362252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.095 [2024-11-15 10:49:38.370157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.095 [2024-11-15 10:49:38.370185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.095 [2024-11-15 10:49:38.378142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.095 [2024-11-15 10:49:38.378164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.095 [2024-11-15 10:49:38.386140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.095 [2024-11-15 10:49:38.386161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (529043) - No such process 00:30:50.095 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 529043 00:30:50.095 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.095 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.095 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.095 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.095 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:50.095 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.095 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.095 delay0 00:30:50.095 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.095 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:50.095 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.095 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.095 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.096 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:50.096 [2024-11-15 10:49:38.552504] nvme_fabric.c: 295:nvme_fabric_discover_probe: 
*WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:58.225 Initializing NVMe Controllers 00:30:58.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:58.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:58.225 Initialization complete. Launching workers. 00:30:58.225 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 236, failed: 22583 00:30:58.225 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22690, failed to submit 129 00:30:58.225 success 22618, unsuccessful 72, failed 0 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:58.225 rmmod nvme_tcp 00:30:58.225 rmmod nvme_fabrics 00:30:58.225 rmmod nvme_keyring 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 527720 ']' 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 527720 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 527720 ']' 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 527720 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 527720 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 527720' 00:30:58.225 killing process with pid 527720 00:30:58.225 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 527720 00:30:58.225 10:49:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 527720 00:30:58.225 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:58.225 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:58.225 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:58.225 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:58.225 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:30:58.225 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:58.225 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:30:58.225 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:58.225 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:58.225 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.225 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:58.225 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.605 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:59.864 00:30:59.864 real 0m28.781s 00:30:59.864 user 0m39.248s 00:30:59.864 sys 0m11.690s 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.864 ************************************ 00:30:59.864 END TEST nvmf_zcopy 00:30:59.864 ************************************ 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:59.864 ************************************ 00:30:59.864 START TEST nvmf_nmic 00:30:59.864 ************************************ 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:59.864 * Looking for test storage... 
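Before the nmic setup continues, the zcopy numbers recorded above can be sanity-checked with plain shell arithmetic. This is a sketch over the logged counters only (the 8192-byte I/O size is taken from the job line in the latency summary), not an extra step of the test:

  # abort example accounting: total I/O (completed + failed) lines up with abort attempts (submitted + failed to submit)
  echo $((236 + 22583))   # 22819
  echo $((22690 + 129))   # 22819
  echo $((22618 + 72))    # 22690, i.e. submitted aborts = successful + unsuccessful ones
  # bandwidth column: 12021.25 IOPS at 8192 bytes per I/O is about 93.92 MiB/s, matching the summary row
  awk 'BEGIN { printf "%.2f MiB/s\n", 12021.25 * 8192 / (1024 * 1024) }'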
00:30:59.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:59.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.864 --rc genhtml_branch_coverage=1 00:30:59.864 --rc genhtml_function_coverage=1 00:30:59.864 --rc genhtml_legend=1 00:30:59.864 --rc geninfo_all_blocks=1 00:30:59.864 --rc geninfo_unexecuted_blocks=1 00:30:59.864 00:30:59.864 ' 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:59.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.864 --rc genhtml_branch_coverage=1 00:30:59.864 --rc genhtml_function_coverage=1 00:30:59.864 --rc genhtml_legend=1 00:30:59.864 --rc geninfo_all_blocks=1 00:30:59.864 --rc geninfo_unexecuted_blocks=1 00:30:59.864 00:30:59.864 ' 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:59.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.864 --rc genhtml_branch_coverage=1 00:30:59.864 --rc genhtml_function_coverage=1 00:30:59.864 --rc genhtml_legend=1 00:30:59.864 --rc geninfo_all_blocks=1 00:30:59.864 --rc geninfo_unexecuted_blocks=1 00:30:59.864 00:30:59.864 ' 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:59.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.864 --rc genhtml_branch_coverage=1 00:30:59.864 --rc genhtml_function_coverage=1 00:30:59.864 --rc genhtml_legend=1 00:30:59.864 --rc geninfo_all_blocks=1 00:30:59.864 --rc geninfo_unexecuted_blocks=1 00:30:59.864 00:30:59.864 ' 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.864 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.865 10:49:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.865 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:02.395 10:49:50 
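For reference, build_nvmf_app_args in the trace above only appends options to the NVMF_APP array; a minimal sketch of what that amounts to for this run, with the shm id, tracepoint mask and core mask taken from the nvmf_tgt command recorded further down in this log (nvmf/common.sh@508) - illustrative only, not the helper itself:

    # Values observed in this run: shm id 0, tracepoint group mask 0xFFFF, interrupt mode on.
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i 0 -e 0xFFFF)        # -i shm id, -e tracepoint group mask
    NVMF_APP+=(--interrupt-mode)      # added because the '[ 1 -eq 1 ]' interrupt check above is true
    "${NVMF_APP[@]}" -m 0xF           # -m 0xF pins the reactors to cores 0-3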
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:31:02.395 Found 0000:82:00.0 (0x8086 - 0x159b) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.395 10:49:50 
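The e810/x722/mlx arrays above bucket NICs purely by PCI vendor:device ID (Intel 0x8086 with 0x1592/0x159b for E810, 0x37d2 for X722, plus the Mellanox 0x15b3 variants for mlx5). A hedged way to check for the same hardware by hand, using the standard lspci -d vendor:device filter and the IDs from this trace:

    lspci -d 8086:159b   # the E810 ID matched at 0000:82:00.0 / 0000:82:00.1 in this run
    lspci -d 8086:1592   # the other E810 ID the script also looks for
    lspci -d 8086:37d2   # X722 (not present on this node)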
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:31:02.395 Found 0000:82:00.1 (0x8086 - 0x159b) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:31:02.395 Found net devices under 0000:82:00.0: cvl_0_0 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.395 
10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:31:02.395 Found net devices under 0000:82:00.1: cvl_0_1 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
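nvmf_tcp_init above splits the two E810 ports across network namespaces so the target and the initiator talk over a real link: cvl_0_0 is moved into a fresh namespace for the target while cvl_0_1 stays in the root namespace for the host. Condensed from the commands in this trace (the link-up and ping verification follow just below):

    ip netns add cvl_0_0_ns_spdk                                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up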
00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:02.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:02.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:31:02.395 00:31:02.395 --- 10.0.0.2 ping statistics --- 00:31:02.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.395 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:02.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:02.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:31:02.395 00:31:02.395 --- 10.0.0.1 ping statistics --- 00:31:02.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.395 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=532428 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 532428 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 532428 ']' 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:02.395 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:02.395 [2024-11-15 10:49:50.592117] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:02.395 [2024-11-15 10:49:50.593314] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:31:02.396 [2024-11-15 10:49:50.593391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.396 [2024-11-15 10:49:50.668261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:02.396 [2024-11-15 10:49:50.729078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.396 [2024-11-15 10:49:50.729131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:02.396 [2024-11-15 10:49:50.729154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:02.396 [2024-11-15 10:49:50.729165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:02.396 [2024-11-15 10:49:50.729175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:02.396 [2024-11-15 10:49:50.730740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.396 [2024-11-15 10:49:50.730839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:02.396 [2024-11-15 10:49:50.730915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.396 [2024-11-15 10:49:50.730911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:02.396 [2024-11-15 10:49:50.819162] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:02.396 [2024-11-15 10:49:50.819418] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:02.396 [2024-11-15 10:49:50.819667] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
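In the launch above, -m 0xF (binary 1111) is why exactly four reactors come up on cores 0-3, and --interrupt-mode is why each nvmf_tgt poll group thread is then switched to intr mode. A minimal sketch of starting the target the same way and waiting for its RPC socket, assuming the stock rpc.py client and the /var/tmp/spdk.sock path that waitforlisten polls above (illustrative, not the autotest helper):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    # Poll until the app answers on its RPC socket; rpc_get_methods is a standard SPDK RPC.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done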
00:31:02.396 [2024-11-15 10:49:50.820236] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:02.396 [2024-11-15 10:49:50.820483] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:02.396 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:02.396 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:31:02.396 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:02.396 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:02.396 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:02.653 [2024-11-15 10:49:50.871616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:02.653 Malloc0 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.653 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.654 
10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:02.654 [2024-11-15 10:49:50.947868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:02.654 test case1: single bdev can't be used in multiple subsystems 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:02.654 [2024-11-15 10:49:50.971504] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:02.654 [2024-11-15 10:49:50.971544] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:02.654 [2024-11-15 10:49:50.971570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.654 request: 00:31:02.654 { 00:31:02.654 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:02.654 "namespace": { 00:31:02.654 "bdev_name": "Malloc0", 00:31:02.654 "no_auto_visible": false 00:31:02.654 }, 00:31:02.654 "method": "nvmf_subsystem_add_ns", 00:31:02.654 "req_id": 1 00:31:02.654 } 00:31:02.654 Got JSON-RPC error response 00:31:02.654 response: 00:31:02.654 { 00:31:02.654 "code": -32602, 00:31:02.654 "message": "Invalid parameters" 00:31:02.654 } 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:02.654 10:49:50 
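test case1 above exercises SPDK's bdev claim model: Malloc0 is already attached to cnode1 with an exclusive_write claim, so nvmf_subsystem_add_ns on cnode2 is rejected (bdev_open error=-1) and the RPC returns -32602 Invalid parameters, which is the result the test expects. A hedged rpc.py sketch of the same sequence, using the same names and flags as the rpc_cmd calls in this trace:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: Malloc0 already claimed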
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:02.654 Adding namespace failed - expected result. 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:02.654 test case2: host connect to nvmf target in multiple paths 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:02.654 [2024-11-15 10:49:50.983609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.654 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:02.911 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:03.167 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:03.167 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:31:03.167 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:31:03.167 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:31:03.167 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:31:05.061 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:31:05.061 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:31:05.061 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:31:05.061 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:31:05.061 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:31:05.061 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:31:05.061 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:05.061 [global] 00:31:05.061 thread=1 00:31:05.061 invalidate=1 
00:31:05.061 rw=write 00:31:05.061 time_based=1 00:31:05.061 runtime=1 00:31:05.061 ioengine=libaio 00:31:05.061 direct=1 00:31:05.061 bs=4096 00:31:05.061 iodepth=1 00:31:05.061 norandommap=0 00:31:05.061 numjobs=1 00:31:05.061 00:31:05.061 verify_dump=1 00:31:05.061 verify_backlog=512 00:31:05.061 verify_state_save=0 00:31:05.061 do_verify=1 00:31:05.061 verify=crc32c-intel 00:31:05.061 [job0] 00:31:05.061 filename=/dev/nvme0n1 00:31:05.061 Could not set queue depth (nvme0n1) 00:31:05.319 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:05.319 fio-3.35 00:31:05.319 Starting 1 thread 00:31:06.689 00:31:06.689 job0: (groupid=0, jobs=1): err= 0: pid=532925: Fri Nov 15 10:49:54 2024 00:31:06.689 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:31:06.689 slat (nsec): min=4622, max=65945, avg=11968.48, stdev=10029.96 00:31:06.689 clat (usec): min=190, max=557, avg=251.53, stdev=64.30 00:31:06.689 lat (usec): min=197, max=573, avg=263.49, stdev=72.88 00:31:06.689 clat percentiles (usec): 00:31:06.689 | 1.00th=[ 196], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 204], 00:31:06.689 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 235], 00:31:06.689 | 70.00th=[ 255], 80.00th=[ 285], 90.00th=[ 371], 95.00th=[ 388], 00:31:06.689 | 99.00th=[ 465], 99.50th=[ 474], 99.90th=[ 510], 99.95th=[ 545], 00:31:06.689 | 99.99th=[ 562] 00:31:06.689 write: IOPS=2429, BW=9718KiB/s (9952kB/s)(9728KiB/1001msec); 0 zone resets 00:31:06.689 slat (usec): min=6, max=28781, avg=22.64, stdev=583.41 00:31:06.689 clat (usec): min=136, max=346, avg=160.57, stdev=18.63 00:31:06.689 lat (usec): min=145, max=29048, avg=183.21, stdev=585.95 00:31:06.689 clat percentiles (usec): 00:31:06.689 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 143], 20.00th=[ 145], 00:31:06.689 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 163], 00:31:06.689 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 194], 00:31:06.689 | 99.00th=[ 217], 99.50th=[ 229], 99.90th=[ 269], 99.95th=[ 343], 00:31:06.689 | 99.99th=[ 347] 00:31:06.689 bw ( KiB/s): min= 8192, max= 8192, per=84.29%, avg=8192.00, stdev= 0.00, samples=1 00:31:06.689 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:06.689 lat (usec) : 250=85.65%, 500=14.29%, 750=0.07% 00:31:06.689 cpu : usr=2.80%, sys=5.80%, ctx=4482, majf=0, minf=1 00:31:06.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.689 issued rwts: total=2048,2432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:06.689 00:31:06.689 Run status group 0 (all jobs): 00:31:06.689 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:31:06.689 WRITE: bw=9718KiB/s (9952kB/s), 9718KiB/s-9718KiB/s (9952kB/s-9952kB/s), io=9728KiB (9961kB), run=1001-1001msec 00:31:06.689 00:31:06.689 Disk stats (read/write): 00:31:06.689 nvme0n1: ios=1892/2048, merge=0/0, ticks=1462/339, in_queue=1801, util=98.70% 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:06.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
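A quick consistency check of the fio numbers above (bs=4096, iodepth=1, runtime ~1001 ms), not part of the test itself: 2432 write completions and 2048 reads at 4 KiB each reproduce the reported bandwidths.

    echo $(( 2432 * 4096 * 1000 / 1001 / 1024 ))   # ~9718 (KiB/s), matching the WRITE line
    echo $(( 2048 * 4096 * 1000 / 1001 / 1024 ))   # ~8184 (KiB/s), matching the READ line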
target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:06.689 rmmod nvme_tcp 00:31:06.689 rmmod nvme_fabrics 00:31:06.689 rmmod nvme_keyring 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 532428 ']' 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 532428 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 532428 ']' 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 532428 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:06.689 10:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 532428 00:31:06.689 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:06.689 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:06.689 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 532428' 00:31:06.689 killing 
process with pid 532428 00:31:06.689 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 532428 00:31:06.689 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 532428 00:31:06.947 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:06.947 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:06.947 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:06.947 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:06.947 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:06.947 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:06.947 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:06.947 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:06.947 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:06.947 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.947 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.947 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.852 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:08.852 00:31:08.852 real 0m9.147s 00:31:08.852 user 0m16.897s 00:31:08.852 sys 0m3.747s 00:31:08.852 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:08.852 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:08.852 ************************************ 00:31:08.852 END TEST nvmf_nmic 00:31:08.852 ************************************ 00:31:08.852 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:08.852 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:08.852 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:08.852 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:09.110 ************************************ 00:31:09.110 START TEST nvmf_fio_target 00:31:09.110 ************************************ 00:31:09.110 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:09.110 * Looking for test storage... 
00:31:09.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:09.110 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:09.110 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:31:09.110 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:09.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.111 --rc genhtml_branch_coverage=1 00:31:09.111 --rc genhtml_function_coverage=1 00:31:09.111 --rc genhtml_legend=1 00:31:09.111 --rc geninfo_all_blocks=1 00:31:09.111 --rc geninfo_unexecuted_blocks=1 00:31:09.111 00:31:09.111 ' 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:09.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.111 --rc genhtml_branch_coverage=1 00:31:09.111 --rc genhtml_function_coverage=1 00:31:09.111 --rc genhtml_legend=1 00:31:09.111 --rc geninfo_all_blocks=1 00:31:09.111 --rc geninfo_unexecuted_blocks=1 00:31:09.111 00:31:09.111 ' 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:09.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.111 --rc genhtml_branch_coverage=1 00:31:09.111 --rc genhtml_function_coverage=1 00:31:09.111 --rc genhtml_legend=1 00:31:09.111 --rc geninfo_all_blocks=1 00:31:09.111 --rc geninfo_unexecuted_blocks=1 00:31:09.111 00:31:09.111 ' 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:09.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.111 --rc genhtml_branch_coverage=1 00:31:09.111 --rc genhtml_function_coverage=1 00:31:09.111 --rc genhtml_legend=1 00:31:09.111 --rc geninfo_all_blocks=1 00:31:09.111 --rc geninfo_unexecuted_blocks=1 00:31:09.111 
00:31:09.111 ' 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.111 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:09.112 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:11.642 10:49:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:11.642 10:49:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:31:11.642 Found 0000:82:00.0 (0x8086 - 0x159b) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:31:11.642 Found 0000:82:00.1 (0x8086 - 0x159b) 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:11.642 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:31:11.643 Found net 
devices under 0000:82:00.0: cvl_0_0 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:31:11.643 Found net devices under 0000:82:00.1: cvl_0_1 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:11.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:11.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:31:11.643 00:31:11.643 --- 10.0.0.2 ping statistics --- 00:31:11.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.643 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:11.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:11.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:31:11.643 00:31:11.643 --- 10.0.0.1 ping statistics --- 00:31:11.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.643 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=535002 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 535002 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 535002 ']' 00:31:11.643 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.644 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:11.644 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:11.644 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:11.644 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:11.644 [2024-11-15 10:49:59.697640] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:11.644 [2024-11-15 10:49:59.698688] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:31:11.644 [2024-11-15 10:49:59.698734] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.644 [2024-11-15 10:49:59.767419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:11.644 [2024-11-15 10:49:59.821455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.644 [2024-11-15 10:49:59.821511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:11.644 [2024-11-15 10:49:59.821537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.644 [2024-11-15 10:49:59.821547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.644 [2024-11-15 10:49:59.821557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:11.644 [2024-11-15 10:49:59.823116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.644 [2024-11-15 10:49:59.823222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:11.644 [2024-11-15 10:49:59.823295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:11.644 [2024-11-15 10:49:59.823298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.644 [2024-11-15 10:49:59.905530] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:11.644 [2024-11-15 10:49:59.905680] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:11.644 [2024-11-15 10:49:59.905987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:11.644 [2024-11-15 10:49:59.906628] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:11.644 [2024-11-15 10:49:59.906885] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
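Everything from here until the fio runs is plain rpc.py plumbing against the target that was just started. Condensed from the trace below, the configuration amounts to roughly the following sketch; rpc.py is abbreviated to $rpc, the loop stands in for the individual bdev_malloc_create calls in fio.sh, and Malloc0..Malloc6 are simply the default names assigned in creation order.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # TCP transport with the options used by this test.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # Seven 64 MiB malloc bdevs with 512-byte blocks (Malloc0..Malloc6).
  for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done
  # Two RAID bdevs on top of them: raid0 over Malloc2/Malloc3, concat over Malloc4..Malloc6.
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  # One subsystem exposing four namespaces, listening on the namespaced target IP.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: connect, then wait until all four namespaces show up as nvme0n1..nvme0n4.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd \
      --hostid=8b464f06-2980-e311-ba20-001e67a94acd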
00:31:11.644 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:11.644 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:31:11.644 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:11.644 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:11.644 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:11.644 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.644 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:11.901 [2024-11-15 10:50:00.236029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.901 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:12.159 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:12.159 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:12.725 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:12.725 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:12.725 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:12.725 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:13.290 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:13.290 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:13.547 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:13.805 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:13.805 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:14.063 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:14.063 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:14.321 10:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:31:14.321 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:14.578 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:14.836 10:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:14.836 10:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:15.093 10:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:15.093 10:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:15.350 10:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.607 [2024-11-15 10:50:03.968156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.607 10:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:15.864 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:16.122 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:16.379 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:16.379 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:31:16.379 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:31:16.379 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:31:16.379 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:31:16.379 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:31:18.274 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:31:18.274 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:31:18.274 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:31:18.274 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:31:18.274 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:31:18.274 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:31:18.274 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:18.274 [global] 00:31:18.274 thread=1 00:31:18.274 invalidate=1 00:31:18.274 rw=write 00:31:18.274 time_based=1 00:31:18.274 runtime=1 00:31:18.274 ioengine=libaio 00:31:18.274 direct=1 00:31:18.274 bs=4096 00:31:18.274 iodepth=1 00:31:18.274 norandommap=0 00:31:18.274 numjobs=1 00:31:18.274 00:31:18.274 verify_dump=1 00:31:18.274 verify_backlog=512 00:31:18.274 verify_state_save=0 00:31:18.274 do_verify=1 00:31:18.274 verify=crc32c-intel 00:31:18.274 [job0] 00:31:18.274 filename=/dev/nvme0n1 00:31:18.274 [job1] 00:31:18.274 filename=/dev/nvme0n2 00:31:18.274 [job2] 00:31:18.274 filename=/dev/nvme0n3 00:31:18.274 [job3] 00:31:18.274 filename=/dev/nvme0n4 00:31:18.274 Could not set queue depth (nvme0n1) 00:31:18.274 Could not set queue depth (nvme0n2) 00:31:18.274 Could not set queue depth (nvme0n3) 00:31:18.274 Could not set queue depth (nvme0n4) 00:31:18.531 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:18.531 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:18.531 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:18.531 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:18.531 fio-3.35 00:31:18.531 Starting 4 threads 00:31:19.901 00:31:19.901 job0: (groupid=0, jobs=1): err= 0: pid=536026: Fri Nov 15 10:50:08 2024 00:31:19.901 read: IOPS=393, BW=1574KiB/s (1612kB/s)(1576KiB/1001msec) 00:31:19.901 slat (nsec): min=7112, max=31915, avg=9024.82, stdev=3459.08 00:31:19.901 clat (usec): min=217, max=41128, avg=2220.31, stdev=8558.47 00:31:19.901 lat (usec): min=225, max=41135, avg=2229.34, stdev=8559.98 00:31:19.901 clat percentiles (usec): 00:31:19.901 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 269], 20.00th=[ 281], 00:31:19.901 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310], 00:31:19.901 | 70.00th=[ 318], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 1336], 00:31:19.901 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:19.901 | 99.99th=[41157] 00:31:19.901 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:31:19.901 slat (nsec): min=6175, max=39941, avg=9325.95, stdev=2209.09 00:31:19.901 clat (usec): min=158, max=385, avg=223.78, stdev=25.53 00:31:19.901 lat (usec): min=168, max=394, avg=233.10, stdev=25.90 00:31:19.901 clat percentiles (usec): 00:31:19.901 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 204], 00:31:19.901 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:31:19.901 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 260], 00:31:19.901 | 99.00th=[ 281], 
99.50th=[ 310], 99.90th=[ 388], 99.95th=[ 388], 00:31:19.901 | 99.99th=[ 388] 00:31:19.901 bw ( KiB/s): min= 4096, max= 4096, per=20.34%, avg=4096.00, stdev= 0.00, samples=1 00:31:19.901 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:19.901 lat (usec) : 250=52.98%, 500=44.59%, 750=0.11% 00:31:19.901 lat (msec) : 2=0.22%, 50=2.10% 00:31:19.901 cpu : usr=0.50%, sys=1.10%, ctx=906, majf=0, minf=1 00:31:19.901 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.901 issued rwts: total=394,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.901 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:19.901 job1: (groupid=0, jobs=1): err= 0: pid=536044: Fri Nov 15 10:50:08 2024 00:31:19.901 read: IOPS=1813, BW=7252KiB/s (7427kB/s)(7296KiB/1006msec) 00:31:19.901 slat (nsec): min=6016, max=34751, avg=8448.60, stdev=3972.61 00:31:19.901 clat (usec): min=200, max=42011, avg=315.67, stdev=1665.48 00:31:19.901 lat (usec): min=207, max=42026, avg=324.12, stdev=1665.86 00:31:19.901 clat percentiles (usec): 00:31:19.901 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 223], 00:31:19.901 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 249], 00:31:19.902 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 306], 00:31:19.902 | 99.00th=[ 330], 99.50th=[ 424], 99.90th=[41157], 99.95th=[42206], 00:31:19.902 | 99.99th=[42206] 00:31:19.902 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:31:19.902 slat (nsec): min=7443, max=60287, avg=10576.79, stdev=4848.94 00:31:19.902 clat (usec): min=127, max=422, avg=186.05, stdev=38.45 00:31:19.902 lat (usec): min=136, max=430, avg=196.62, stdev=39.89 00:31:19.902 clat percentiles (usec): 00:31:19.902 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 153], 00:31:19.902 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 174], 60.00th=[ 190], 00:31:19.902 | 70.00th=[ 204], 80.00th=[ 229], 90.00th=[ 243], 95.00th=[ 247], 00:31:19.902 | 99.00th=[ 273], 99.50th=[ 322], 99.90th=[ 383], 99.95th=[ 388], 00:31:19.902 | 99.99th=[ 424] 00:31:19.902 bw ( KiB/s): min= 6768, max= 9616, per=40.68%, avg=8192.00, stdev=2013.84, samples=2 00:31:19.902 iops : min= 1692, max= 2404, avg=2048.00, stdev=503.46, samples=2 00:31:19.902 lat (usec) : 250=80.73%, 500=19.14%, 750=0.03% 00:31:19.902 lat (msec) : 2=0.03%, 50=0.08% 00:31:19.902 cpu : usr=3.68%, sys=4.28%, ctx=3872, majf=0, minf=2 00:31:19.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.902 issued rwts: total=1824,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:19.902 job2: (groupid=0, jobs=1): err= 0: pid=536063: Fri Nov 15 10:50:08 2024 00:31:19.902 read: IOPS=1879, BW=7516KiB/s (7697kB/s)(7524KiB/1001msec) 00:31:19.902 slat (nsec): min=7142, max=36579, avg=9633.57, stdev=4202.96 00:31:19.902 clat (usec): min=203, max=41474, avg=296.71, stdev=1362.27 00:31:19.902 lat (usec): min=211, max=41485, avg=306.34, stdev=1362.54 00:31:19.902 clat percentiles (usec): 00:31:19.902 | 1.00th=[ 208], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:31:19.902 | 30.00th=[ 225], 40.00th=[ 229], 
50.00th=[ 237], 60.00th=[ 243], 00:31:19.902 | 70.00th=[ 251], 80.00th=[ 269], 90.00th=[ 302], 95.00th=[ 318], 00:31:19.902 | 99.00th=[ 371], 99.50th=[ 375], 99.90th=[41157], 99.95th=[41681], 00:31:19.902 | 99.99th=[41681] 00:31:19.902 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:31:19.902 slat (usec): min=8, max=887, avg=12.42, stdev=20.04 00:31:19.902 clat (usec): min=147, max=331, avg=188.22, stdev=25.22 00:31:19.902 lat (usec): min=156, max=1068, avg=200.63, stdev=34.12 00:31:19.902 clat percentiles (usec): 00:31:19.902 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:31:19.902 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 194], 00:31:19.902 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 223], 95.00th=[ 237], 00:31:19.902 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 293], 00:31:19.902 | 99.99th=[ 330] 00:31:19.902 bw ( KiB/s): min=10488, max=10488, per=52.08%, avg=10488.00, stdev= 0.00, samples=1 00:31:19.902 iops : min= 2622, max= 2622, avg=2622.00, stdev= 0.00, samples=1 00:31:19.902 lat (usec) : 250=84.47%, 500=15.42% 00:31:19.902 lat (msec) : 2=0.03%, 20=0.03%, 50=0.05% 00:31:19.902 cpu : usr=2.90%, sys=5.90%, ctx=3931, majf=0, minf=1 00:31:19.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.902 issued rwts: total=1881,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:19.902 job3: (groupid=0, jobs=1): err= 0: pid=536064: Fri Nov 15 10:50:08 2024 00:31:19.902 read: IOPS=21, BW=86.5KiB/s (88.6kB/s)(88.0KiB/1017msec) 00:31:19.902 slat (nsec): min=8080, max=26861, avg=14973.64, stdev=4469.44 00:31:19.902 clat (usec): min=40833, max=41092, avg=40978.01, stdev=57.69 00:31:19.902 lat (usec): min=40860, max=41109, avg=40992.99, stdev=55.87 00:31:19.902 clat percentiles (usec): 00:31:19.902 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:19.902 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:19.902 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:19.902 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:19.902 | 99.99th=[41157] 00:31:19.902 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:31:19.902 slat (nsec): min=7966, max=48540, avg=9215.81, stdev=2320.88 00:31:19.902 clat (usec): min=148, max=1096, avg=212.56, stdev=66.56 00:31:19.902 lat (usec): min=157, max=1105, avg=221.78, stdev=66.72 00:31:19.902 clat percentiles (usec): 00:31:19.902 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:31:19.902 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 200], 60.00th=[ 217], 00:31:19.902 | 70.00th=[ 229], 80.00th=[ 243], 90.00th=[ 260], 95.00th=[ 277], 00:31:19.902 | 99.00th=[ 306], 99.50th=[ 742], 99.90th=[ 1090], 99.95th=[ 1090], 00:31:19.902 | 99.99th=[ 1090] 00:31:19.902 bw ( KiB/s): min= 4096, max= 4096, per=20.34%, avg=4096.00, stdev= 0.00, samples=1 00:31:19.902 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:19.902 lat (usec) : 250=80.71%, 500=14.23%, 750=0.56%, 1000=0.19% 00:31:19.902 lat (msec) : 2=0.19%, 50=4.12% 00:31:19.902 cpu : usr=0.00%, sys=0.98%, ctx=534, majf=0, minf=1 00:31:19.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:31:19.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.902 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:19.902 00:31:19.902 Run status group 0 (all jobs): 00:31:19.902 READ: bw=15.8MiB/s (16.6MB/s), 86.5KiB/s-7516KiB/s (88.6kB/s-7697kB/s), io=16.1MiB (16.9MB), run=1001-1017msec 00:31:19.902 WRITE: bw=19.7MiB/s (20.6MB/s), 2014KiB/s-8184KiB/s (2062kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1017msec 00:31:19.902 00:31:19.902 Disk stats (read/write): 00:31:19.902 nvme0n1: ios=67/512, merge=0/0, ticks=724/112, in_queue=836, util=86.47% 00:31:19.902 nvme0n2: ios=1768/2048, merge=0/0, ticks=547/373, in_queue=920, util=98.98% 00:31:19.902 nvme0n3: ios=1777/2048, merge=0/0, ticks=653/389, in_queue=1042, util=100.00% 00:31:19.902 nvme0n4: ios=30/512, merge=0/0, ticks=997/109, in_queue=1106, util=90.40% 00:31:19.902 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:19.902 [global] 00:31:19.902 thread=1 00:31:19.902 invalidate=1 00:31:19.902 rw=randwrite 00:31:19.902 time_based=1 00:31:19.902 runtime=1 00:31:19.902 ioengine=libaio 00:31:19.902 direct=1 00:31:19.902 bs=4096 00:31:19.902 iodepth=1 00:31:19.902 norandommap=0 00:31:19.902 numjobs=1 00:31:19.902 00:31:19.902 verify_dump=1 00:31:19.902 verify_backlog=512 00:31:19.902 verify_state_save=0 00:31:19.902 do_verify=1 00:31:19.902 verify=crc32c-intel 00:31:19.902 [job0] 00:31:19.902 filename=/dev/nvme0n1 00:31:19.902 [job1] 00:31:19.902 filename=/dev/nvme0n2 00:31:19.902 [job2] 00:31:19.902 filename=/dev/nvme0n3 00:31:19.902 [job3] 00:31:19.902 filename=/dev/nvme0n4 00:31:19.902 Could not set queue depth (nvme0n1) 00:31:19.902 Could not set queue depth (nvme0n2) 00:31:19.902 Could not set queue depth (nvme0n3) 00:31:19.902 Could not set queue depth (nvme0n4) 00:31:20.159 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:20.159 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:20.159 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:20.159 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:20.159 fio-3.35 00:31:20.159 Starting 4 threads 00:31:21.530 00:31:21.530 job0: (groupid=0, jobs=1): err= 0: pid=536287: Fri Nov 15 10:50:09 2024 00:31:21.530 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:31:21.530 slat (nsec): min=5669, max=27965, avg=7128.88, stdev=2868.89 00:31:21.530 clat (usec): min=178, max=41986, avg=1595.30, stdev=7318.83 00:31:21.530 lat (usec): min=184, max=41999, avg=1602.43, stdev=7319.82 00:31:21.530 clat percentiles (usec): 00:31:21.530 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 204], 00:31:21.530 | 30.00th=[ 219], 40.00th=[ 233], 50.00th=[ 245], 60.00th=[ 249], 00:31:21.530 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 383], 00:31:21.530 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:21.530 | 99.99th=[42206] 00:31:21.530 write: IOPS=727, BW=2909KiB/s (2979kB/s)(2912KiB/1001msec); 0 zone resets 00:31:21.530 slat (nsec): min=7504, 
max=44557, avg=12353.38, stdev=6379.39 00:31:21.530 clat (usec): min=147, max=492, avg=229.22, stdev=68.96 00:31:21.530 lat (usec): min=156, max=502, avg=241.58, stdev=71.16 00:31:21.531 clat percentiles (usec): 00:31:21.531 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 169], 00:31:21.531 | 30.00th=[ 178], 40.00th=[ 194], 50.00th=[ 217], 60.00th=[ 231], 00:31:21.531 | 70.00th=[ 243], 80.00th=[ 273], 90.00th=[ 343], 95.00th=[ 367], 00:31:21.531 | 99.00th=[ 437], 99.50th=[ 453], 99.90th=[ 494], 99.95th=[ 494], 00:31:21.531 | 99.99th=[ 494] 00:31:21.531 bw ( KiB/s): min= 4096, max= 4096, per=29.27%, avg=4096.00, stdev= 0.00, samples=1 00:31:21.531 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:21.531 lat (usec) : 250=69.52%, 500=28.87%, 750=0.24% 00:31:21.531 lat (msec) : 50=1.37% 00:31:21.531 cpu : usr=1.00%, sys=1.60%, ctx=1240, majf=0, minf=1 00:31:21.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.531 issued rwts: total=512,728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:21.531 job1: (groupid=0, jobs=1): err= 0: pid=536288: Fri Nov 15 10:50:09 2024 00:31:21.531 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:31:21.531 slat (nsec): min=5923, max=32852, avg=7463.51, stdev=2209.73 00:31:21.531 clat (usec): min=204, max=41952, avg=1561.99, stdev=7094.79 00:31:21.531 lat (usec): min=210, max=41965, avg=1569.45, stdev=7095.84 00:31:21.531 clat percentiles (usec): 00:31:21.531 | 1.00th=[ 217], 5.00th=[ 239], 10.00th=[ 253], 20.00th=[ 260], 00:31:21.531 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:31:21.531 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 347], 95.00th=[ 433], 00:31:21.531 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:21.531 | 99.99th=[42206] 00:31:21.531 write: IOPS=844, BW=3377KiB/s (3458kB/s)(3380KiB/1001msec); 0 zone resets 00:31:21.531 slat (nsec): min=7133, max=77369, avg=12813.01, stdev=9112.84 00:31:21.531 clat (usec): min=153, max=379, avg=214.44, stdev=31.19 00:31:21.531 lat (usec): min=161, max=434, avg=227.25, stdev=33.29 00:31:21.531 clat percentiles (usec): 00:31:21.531 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 190], 00:31:21.531 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 219], 00:31:21.531 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 265], 00:31:21.531 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 379], 99.95th=[ 379], 00:31:21.531 | 99.99th=[ 379] 00:31:21.531 bw ( KiB/s): min= 4096, max= 4096, per=29.27%, avg=4096.00, stdev= 0.00, samples=1 00:31:21.531 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:21.531 lat (usec) : 250=59.32%, 500=39.28%, 750=0.22% 00:31:21.531 lat (msec) : 50=1.18% 00:31:21.531 cpu : usr=1.30%, sys=1.60%, ctx=1357, majf=0, minf=1 00:31:21.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.531 issued rwts: total=512,845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:21.531 job2: (groupid=0, jobs=1): err= 0: pid=536289: Fri Nov 15 10:50:09 
2024 00:31:21.531 read: IOPS=995, BW=3981KiB/s (4076kB/s)(4120KiB/1035msec) 00:31:21.531 slat (nsec): min=5156, max=45405, avg=8795.73, stdev=5227.40 00:31:21.531 clat (usec): min=213, max=41182, avg=628.22, stdev=3723.78 00:31:21.531 lat (usec): min=218, max=41188, avg=637.02, stdev=3723.89 00:31:21.531 clat percentiles (usec): 00:31:21.531 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 237], 00:31:21.531 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 273], 00:31:21.531 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 371], 95.00th=[ 400], 00:31:21.531 | 99.00th=[ 537], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:21.531 | 99.99th=[41157] 00:31:21.531 write: IOPS=1484, BW=5936KiB/s (6079kB/s)(6144KiB/1035msec); 0 zone resets 00:31:21.531 slat (nsec): min=6674, max=48074, avg=10195.14, stdev=5180.71 00:31:21.531 clat (usec): min=145, max=483, avg=231.68, stdev=54.54 00:31:21.531 lat (usec): min=152, max=493, avg=241.88, stdev=56.22 00:31:21.531 clat percentiles (usec): 00:31:21.531 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 176], 20.00th=[ 190], 00:31:21.531 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 221], 60.00th=[ 233], 00:31:21.531 | 70.00th=[ 247], 80.00th=[ 269], 90.00th=[ 310], 95.00th=[ 347], 00:31:21.531 | 99.00th=[ 400], 99.50th=[ 429], 99.90th=[ 482], 99.95th=[ 486], 00:31:21.531 | 99.99th=[ 486] 00:31:21.531 bw ( KiB/s): min= 4096, max= 8192, per=43.90%, avg=6144.00, stdev=2896.31, samples=2 00:31:21.531 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:31:21.531 lat (usec) : 250=56.82%, 500=42.63%, 750=0.19% 00:31:21.531 lat (msec) : 50=0.35% 00:31:21.531 cpu : usr=1.16%, sys=2.61%, ctx=2566, majf=0, minf=1 00:31:21.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.531 issued rwts: total=1030,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:21.531 job3: (groupid=0, jobs=1): err= 0: pid=536290: Fri Nov 15 10:50:09 2024 00:31:21.531 read: IOPS=31, BW=126KiB/s (129kB/s)(128KiB/1013msec) 00:31:21.531 slat (nsec): min=6638, max=27845, avg=12816.31, stdev=5186.50 00:31:21.531 clat (usec): min=286, max=42002, avg=27110.10, stdev=19675.64 00:31:21.531 lat (usec): min=294, max=42016, avg=27122.92, stdev=19679.13 00:31:21.531 clat percentiles (usec): 00:31:21.531 | 1.00th=[ 289], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 322], 00:31:21.531 | 30.00th=[ 523], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:21.531 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:21.531 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:21.531 | 99.99th=[42206] 00:31:21.531 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:31:21.531 slat (nsec): min=8994, max=53193, avg=15875.71, stdev=8133.10 00:31:21.531 clat (usec): min=169, max=470, avg=262.65, stdev=64.15 00:31:21.531 lat (usec): min=182, max=481, avg=278.52, stdev=63.61 00:31:21.531 clat percentiles (usec): 00:31:21.531 | 1.00th=[ 176], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 215], 00:31:21.531 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 253], 00:31:21.531 | 70.00th=[ 269], 80.00th=[ 302], 90.00th=[ 388], 95.00th=[ 404], 00:31:21.531 | 99.00th=[ 433], 99.50th=[ 457], 99.90th=[ 469], 99.95th=[ 469], 00:31:21.531 | 99.99th=[ 469] 
00:31:21.531 bw ( KiB/s): min= 4096, max= 4096, per=29.27%, avg=4096.00, stdev= 0.00, samples=1 00:31:21.531 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:21.531 lat (usec) : 250=52.94%, 500=42.83%, 750=0.37% 00:31:21.531 lat (msec) : 50=3.86% 00:31:21.531 cpu : usr=0.10%, sys=1.09%, ctx=545, majf=0, minf=1 00:31:21.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.531 issued rwts: total=32,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:21.531 00:31:21.531 Run status group 0 (all jobs): 00:31:21.531 READ: bw=8062KiB/s (8255kB/s), 126KiB/s-3981KiB/s (129kB/s-4076kB/s), io=8344KiB (8544kB), run=1001-1035msec 00:31:21.531 WRITE: bw=13.7MiB/s (14.3MB/s), 2022KiB/s-5936KiB/s (2070kB/s-6079kB/s), io=14.1MiB (14.8MB), run=1001-1035msec 00:31:21.531 00:31:21.531 Disk stats (read/write): 00:31:21.531 nvme0n1: ios=558/512, merge=0/0, ticks=756/129, in_queue=885, util=91.08% 00:31:21.531 nvme0n2: ios=280/512, merge=0/0, ticks=729/104, in_queue=833, util=86.89% 00:31:21.531 nvme0n3: ios=934/1024, merge=0/0, ticks=576/247, in_queue=823, util=89.06% 00:31:21.531 nvme0n4: ios=86/512, merge=0/0, ticks=1508/134, in_queue=1642, util=98.32% 00:31:21.531 10:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:21.531 [global] 00:31:21.531 thread=1 00:31:21.531 invalidate=1 00:31:21.531 rw=write 00:31:21.531 time_based=1 00:31:21.531 runtime=1 00:31:21.531 ioengine=libaio 00:31:21.531 direct=1 00:31:21.531 bs=4096 00:31:21.531 iodepth=128 00:31:21.531 norandommap=0 00:31:21.531 numjobs=1 00:31:21.531 00:31:21.531 verify_dump=1 00:31:21.531 verify_backlog=512 00:31:21.531 verify_state_save=0 00:31:21.531 do_verify=1 00:31:21.531 verify=crc32c-intel 00:31:21.531 [job0] 00:31:21.531 filename=/dev/nvme0n1 00:31:21.531 [job1] 00:31:21.531 filename=/dev/nvme0n2 00:31:21.531 [job2] 00:31:21.531 filename=/dev/nvme0n3 00:31:21.531 [job3] 00:31:21.531 filename=/dev/nvme0n4 00:31:21.531 Could not set queue depth (nvme0n1) 00:31:21.531 Could not set queue depth (nvme0n2) 00:31:21.531 Could not set queue depth (nvme0n3) 00:31:21.531 Could not set queue depth (nvme0n4) 00:31:21.531 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:21.531 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:21.531 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:21.531 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:21.531 fio-3.35 00:31:21.531 Starting 4 threads 00:31:22.902 00:31:22.903 job0: (groupid=0, jobs=1): err= 0: pid=536522: Fri Nov 15 10:50:11 2024 00:31:22.903 read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec) 00:31:22.903 slat (usec): min=3, max=12960, avg=106.62, stdev=839.08 00:31:22.903 clat (usec): min=3037, max=52830, avg=13696.61, stdev=5301.10 00:31:22.903 lat (usec): min=3051, max=52836, avg=13803.23, stdev=5375.31 00:31:22.903 clat percentiles (usec): 00:31:22.903 | 1.00th=[ 5014], 5.00th=[ 7439], 10.00th=[ 8979], 20.00th=[ 
9896], 00:31:22.903 | 30.00th=[11207], 40.00th=[12125], 50.00th=[13304], 60.00th=[14222], 00:31:22.903 | 70.00th=[14746], 80.00th=[16057], 90.00th=[18744], 95.00th=[20579], 00:31:22.903 | 99.00th=[38011], 99.50th=[43254], 99.90th=[48497], 99.95th=[52691], 00:31:22.903 | 99.99th=[52691] 00:31:22.903 write: IOPS=4282, BW=16.7MiB/s (17.5MB/s)(17.0MiB/1014msec); 0 zone resets 00:31:22.903 slat (usec): min=3, max=19200, avg=113.41, stdev=767.34 00:31:22.903 clat (usec): min=2258, max=64198, avg=16492.64, stdev=10801.98 00:31:22.903 lat (usec): min=2266, max=64215, avg=16606.05, stdev=10868.56 00:31:22.903 clat percentiles (usec): 00:31:22.903 | 1.00th=[ 3326], 5.00th=[ 6194], 10.00th=[ 7570], 20.00th=[ 9634], 00:31:22.903 | 30.00th=[10683], 40.00th=[11338], 50.00th=[12387], 60.00th=[13304], 00:31:22.903 | 70.00th=[15270], 80.00th=[23725], 90.00th=[35390], 95.00th=[40633], 00:31:22.903 | 99.00th=[51119], 99.50th=[60031], 99.90th=[64226], 99.95th=[64226], 00:31:22.903 | 99.99th=[64226] 00:31:22.903 bw ( KiB/s): min=13232, max=20480, per=26.56%, avg=16856.00, stdev=5125.11, samples=2 00:31:22.903 iops : min= 3308, max= 5120, avg=4214.00, stdev=1281.28, samples=2 00:31:22.903 lat (msec) : 4=1.13%, 10=22.04%, 20=60.77%, 50=15.42%, 100=0.64% 00:31:22.903 cpu : usr=3.75%, sys=5.03%, ctx=355, majf=0, minf=1 00:31:22.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:22.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:22.903 issued rwts: total=4096,4342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:22.903 job1: (groupid=0, jobs=1): err= 0: pid=536523: Fri Nov 15 10:50:11 2024 00:31:22.903 read: IOPS=4652, BW=18.2MiB/s (19.1MB/s)(19.0MiB/1044msec) 00:31:22.903 slat (usec): min=2, max=23044, avg=92.66, stdev=607.29 00:31:22.903 clat (usec): min=4416, max=50794, avg=13310.27, stdev=7864.94 00:31:22.903 lat (usec): min=4425, max=50812, avg=13402.93, stdev=7880.23 00:31:22.903 clat percentiles (usec): 00:31:22.903 | 1.00th=[ 5407], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[ 9896], 00:31:22.903 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11338], 60.00th=[11731], 00:31:22.903 | 70.00th=[12387], 80.00th=[13173], 90.00th=[15664], 95.00th=[35390], 00:31:22.903 | 99.00th=[47973], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:31:22.903 | 99.99th=[50594] 00:31:22.903 write: IOPS=4904, BW=19.2MiB/s (20.1MB/s)(20.0MiB/1044msec); 0 zone resets 00:31:22.903 slat (usec): min=3, max=27856, avg=98.95, stdev=785.72 00:31:22.903 clat (usec): min=5237, max=75796, avg=13089.52, stdev=8385.86 00:31:22.903 lat (usec): min=5246, max=75814, avg=13188.47, stdev=8442.39 00:31:22.903 clat percentiles (usec): 00:31:22.903 | 1.00th=[ 6390], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[ 9503], 00:31:22.903 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[10945], 60.00th=[11338], 00:31:22.903 | 70.00th=[11994], 80.00th=[13304], 90.00th=[19530], 95.00th=[24773], 00:31:22.903 | 99.00th=[58459], 99.50th=[60556], 99.90th=[60556], 99.95th=[61604], 00:31:22.903 | 99.99th=[76022] 00:31:22.903 bw ( KiB/s): min=16392, max=24568, per=32.27%, avg=20480.00, stdev=5781.31, samples=2 00:31:22.903 iops : min= 4098, max= 6142, avg=5120.00, stdev=1445.33, samples=2 00:31:22.903 lat (msec) : 10=28.90%, 20=62.67%, 50=6.79%, 100=1.64% 00:31:22.903 cpu : usr=4.99%, sys=6.52%, ctx=503, majf=0, minf=1 00:31:22.903 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:22.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:22.903 issued rwts: total=4857,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:22.903 job2: (groupid=0, jobs=1): err= 0: pid=536524: Fri Nov 15 10:50:11 2024 00:31:22.903 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:31:22.903 slat (usec): min=2, max=29512, avg=181.44, stdev=1338.23 00:31:22.903 clat (usec): min=3872, max=71132, avg=23611.65, stdev=13926.06 00:31:22.903 lat (usec): min=3886, max=71147, avg=23793.09, stdev=14018.35 00:31:22.903 clat percentiles (usec): 00:31:22.903 | 1.00th=[11207], 5.00th=[11731], 10.00th=[12518], 20.00th=[13304], 00:31:22.903 | 30.00th=[15664], 40.00th=[17171], 50.00th=[17695], 60.00th=[19530], 00:31:22.903 | 70.00th=[22676], 80.00th=[34866], 90.00th=[46924], 95.00th=[53216], 00:31:22.903 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:31:22.903 | 99.99th=[70779] 00:31:22.903 write: IOPS=3285, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1007msec); 0 zone resets 00:31:22.903 slat (usec): min=3, max=22853, avg=123.81, stdev=952.22 00:31:22.903 clat (usec): min=1038, max=71088, avg=16713.41, stdev=10040.10 00:31:22.903 lat (usec): min=1046, max=71106, avg=16837.22, stdev=10138.42 00:31:22.903 clat percentiles (usec): 00:31:22.903 | 1.00th=[ 3818], 5.00th=[ 7504], 10.00th=[ 8979], 20.00th=[11207], 00:31:22.903 | 30.00th=[12911], 40.00th=[13960], 50.00th=[14484], 60.00th=[15008], 00:31:22.903 | 70.00th=[15926], 80.00th=[17171], 90.00th=[29754], 95.00th=[38011], 00:31:22.903 | 99.00th=[57410], 99.50th=[58983], 99.90th=[60556], 99.95th=[66847], 00:31:22.903 | 99.99th=[70779] 00:31:22.903 bw ( KiB/s): min=12328, max=13112, per=20.04%, avg=12720.00, stdev=554.37, samples=2 00:31:22.903 iops : min= 3082, max= 3278, avg=3180.00, stdev=138.59, samples=2 00:31:22.903 lat (msec) : 2=0.09%, 4=0.60%, 10=6.93%, 20=66.85%, 50=20.92% 00:31:22.903 lat (msec) : 100=4.61% 00:31:22.903 cpu : usr=3.18%, sys=5.17%, ctx=248, majf=0, minf=2 00:31:22.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:22.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:22.903 issued rwts: total=3072,3308,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:22.903 job3: (groupid=0, jobs=1): err= 0: pid=536525: Fri Nov 15 10:50:11 2024 00:31:22.903 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:31:22.903 slat (usec): min=2, max=54871, avg=154.20, stdev=1384.86 00:31:22.903 clat (msec): min=4, max=111, avg=19.94, stdev=18.31 00:31:22.903 lat (msec): min=4, max=111, avg=20.10, stdev=18.41 00:31:22.903 clat percentiles (msec): 00:31:22.903 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 12], 00:31:22.903 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:31:22.903 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 28], 95.00th=[ 71], 00:31:22.903 | 99.00th=[ 99], 99.50th=[ 112], 99.90th=[ 112], 99.95th=[ 112], 00:31:22.903 | 99.99th=[ 112] 00:31:22.903 write: IOPS=3761, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1009msec); 0 zone resets 00:31:22.903 slat (usec): min=3, max=28730, avg=110.23, stdev=790.47 00:31:22.903 clat (usec): min=1226, max=41918, avg=14847.71, 
stdev=6581.45 00:31:22.903 lat (usec): min=3015, max=41931, avg=14957.94, stdev=6591.95 00:31:22.903 clat percentiles (usec): 00:31:22.903 | 1.00th=[ 3818], 5.00th=[ 7767], 10.00th=[ 9372], 20.00th=[11338], 00:31:22.903 | 30.00th=[12125], 40.00th=[13173], 50.00th=[13698], 60.00th=[14222], 00:31:22.903 | 70.00th=[14615], 80.00th=[15926], 90.00th=[23200], 95.00th=[25035], 00:31:22.903 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:22.903 | 99.99th=[41681] 00:31:22.903 bw ( KiB/s): min=11304, max=18032, per=23.11%, avg=14668.00, stdev=4757.41, samples=2 00:31:22.903 iops : min= 2826, max= 4508, avg=3667.00, stdev=1189.35, samples=2 00:31:22.903 lat (msec) : 2=0.01%, 4=0.51%, 10=7.44%, 20=77.08%, 50=10.67% 00:31:22.903 lat (msec) : 100=3.86%, 250=0.42% 00:31:22.903 cpu : usr=3.77%, sys=7.24%, ctx=356, majf=0, minf=1 00:31:22.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:31:22.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:22.903 issued rwts: total=3584,3795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:22.903 00:31:22.903 Run status group 0 (all jobs): 00:31:22.903 READ: bw=58.4MiB/s (61.2MB/s), 11.9MiB/s-18.2MiB/s (12.5MB/s-19.1MB/s), io=61.0MiB (63.9MB), run=1007-1044msec 00:31:22.903 WRITE: bw=62.0MiB/s (65.0MB/s), 12.8MiB/s-19.2MiB/s (13.5MB/s-20.1MB/s), io=64.7MiB (67.8MB), run=1007-1044msec 00:31:22.903 00:31:22.903 Disk stats (read/write): 00:31:22.903 nvme0n1: ios=3610/3664, merge=0/0, ticks=47405/56412, in_queue=103817, util=96.59% 00:31:22.903 nvme0n2: ios=3892/4096, merge=0/0, ticks=19274/24291, in_queue=43565, util=86.86% 00:31:22.903 nvme0n3: ios=2629/3072, merge=0/0, ticks=31609/30366, in_queue=61975, util=88.73% 00:31:22.903 nvme0n4: ios=3340/3584, merge=0/0, ticks=25418/24672, in_queue=50090, util=98.53% 00:31:22.903 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:22.903 [global] 00:31:22.903 thread=1 00:31:22.903 invalidate=1 00:31:22.903 rw=randwrite 00:31:22.903 time_based=1 00:31:22.903 runtime=1 00:31:22.903 ioengine=libaio 00:31:22.903 direct=1 00:31:22.903 bs=4096 00:31:22.903 iodepth=128 00:31:22.903 norandommap=0 00:31:22.903 numjobs=1 00:31:22.903 00:31:22.903 verify_dump=1 00:31:22.903 verify_backlog=512 00:31:22.903 verify_state_save=0 00:31:22.903 do_verify=1 00:31:22.903 verify=crc32c-intel 00:31:22.903 [job0] 00:31:22.903 filename=/dev/nvme0n1 00:31:22.903 [job1] 00:31:22.903 filename=/dev/nvme0n2 00:31:22.903 [job2] 00:31:22.903 filename=/dev/nvme0n3 00:31:22.903 [job3] 00:31:22.903 filename=/dev/nvme0n4 00:31:22.903 Could not set queue depth (nvme0n1) 00:31:22.903 Could not set queue depth (nvme0n2) 00:31:22.903 Could not set queue depth (nvme0n3) 00:31:22.903 Could not set queue depth (nvme0n4) 00:31:22.903 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:22.903 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:22.904 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:22.904 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:31:22.904 fio-3.35 00:31:22.904 Starting 4 threads 00:31:24.275 00:31:24.275 job0: (groupid=0, jobs=1): err= 0: pid=536748: Fri Nov 15 10:50:12 2024 00:31:24.275 read: IOPS=4347, BW=17.0MiB/s (17.8MB/s)(17.0MiB/1002msec) 00:31:24.275 slat (usec): min=2, max=11132, avg=105.06, stdev=662.30 00:31:24.275 clat (usec): min=758, max=60443, avg=13995.12, stdev=7171.10 00:31:24.275 lat (usec): min=2722, max=60448, avg=14100.18, stdev=7205.65 00:31:24.275 clat percentiles (usec): 00:31:24.275 | 1.00th=[ 4015], 5.00th=[ 4883], 10.00th=[ 6587], 20.00th=[ 9503], 00:31:24.275 | 30.00th=[10552], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:31:24.275 | 70.00th=[15533], 80.00th=[19530], 90.00th=[22676], 95.00th=[25035], 00:31:24.275 | 99.00th=[44827], 99.50th=[47449], 99.90th=[60556], 99.95th=[60556], 00:31:24.275 | 99.99th=[60556] 00:31:24.275 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:31:24.275 slat (usec): min=3, max=21874, avg=107.19, stdev=659.12 00:31:24.275 clat (usec): min=767, max=66906, avg=14352.65, stdev=8272.34 00:31:24.275 lat (usec): min=796, max=66913, avg=14459.84, stdev=8313.30 00:31:24.275 clat percentiles (usec): 00:31:24.275 | 1.00th=[ 2671], 5.00th=[ 5211], 10.00th=[ 6521], 20.00th=[10028], 00:31:24.275 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12387], 00:31:24.275 | 70.00th=[14353], 80.00th=[16319], 90.00th=[25822], 95.00th=[32637], 00:31:24.275 | 99.00th=[43254], 99.50th=[50594], 99.90th=[66847], 99.95th=[66847], 00:31:24.275 | 99.99th=[66847] 00:31:24.275 bw ( KiB/s): min=16608, max=20256, per=25.90%, avg=18432.00, stdev=2579.53, samples=2 00:31:24.275 iops : min= 4152, max= 5064, avg=4608.00, stdev=644.88, samples=2 00:31:24.275 lat (usec) : 1000=0.02% 00:31:24.275 lat (msec) : 2=0.30%, 4=1.29%, 10=20.67%, 20=62.41%, 50=14.69% 00:31:24.275 lat (msec) : 100=0.61% 00:31:24.275 cpu : usr=5.19%, sys=8.99%, ctx=422, majf=0, minf=1 00:31:24.275 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:24.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.275 issued rwts: total=4356,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.275 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:24.275 job1: (groupid=0, jobs=1): err= 0: pid=536749: Fri Nov 15 10:50:12 2024 00:31:24.275 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:31:24.275 slat (usec): min=3, max=3966, avg=86.43, stdev=401.67 00:31:24.275 clat (usec): min=6474, max=15715, avg=11762.01, stdev=1243.86 00:31:24.275 lat (usec): min=6480, max=15730, avg=11848.45, stdev=1241.37 00:31:24.275 clat percentiles (usec): 00:31:24.275 | 1.00th=[ 8979], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[10683], 00:31:24.275 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:31:24.276 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13304], 95.00th=[13829], 00:31:24.276 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15533], 99.95th=[15533], 00:31:24.276 | 99.99th=[15664] 00:31:24.276 write: IOPS=5406, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1002msec); 0 zone resets 00:31:24.276 slat (usec): min=4, max=8950, avg=91.43, stdev=489.64 00:31:24.276 clat (usec): min=369, max=39343, avg=12280.70, stdev=4552.97 00:31:24.276 lat (usec): min=2755, max=39351, avg=12372.13, stdev=4586.75 00:31:24.276 clat percentiles (usec): 00:31:24.276 | 1.00th=[ 6718], 5.00th=[ 9503], 10.00th=[ 9765], 
20.00th=[10159], 00:31:24.276 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11338], 60.00th=[11863], 00:31:24.276 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13435], 95.00th=[22676], 00:31:24.276 | 99.00th=[35390], 99.50th=[37487], 99.90th=[38536], 99.95th=[39584], 00:31:24.276 | 99.99th=[39584] 00:31:24.276 bw ( KiB/s): min=20480, max=21832, per=29.73%, avg=21156.00, stdev=956.01, samples=2 00:31:24.276 iops : min= 5120, max= 5458, avg=5289.00, stdev=239.00, samples=2 00:31:24.276 lat (usec) : 500=0.01% 00:31:24.276 lat (msec) : 4=0.34%, 10=9.77%, 20=86.44%, 50=3.45% 00:31:24.276 cpu : usr=8.59%, sys=10.69%, ctx=493, majf=0, minf=1 00:31:24.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:24.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.276 issued rwts: total=5120,5417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:24.276 job2: (groupid=0, jobs=1): err= 0: pid=536750: Fri Nov 15 10:50:12 2024 00:31:24.276 read: IOPS=3418, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1003msec) 00:31:24.276 slat (usec): min=2, max=29362, avg=158.11, stdev=1017.19 00:31:24.276 clat (usec): min=2289, max=68454, avg=19958.72, stdev=9815.20 00:31:24.276 lat (usec): min=2308, max=68458, avg=20116.83, stdev=9878.34 00:31:24.276 clat percentiles (usec): 00:31:24.276 | 1.00th=[ 5669], 5.00th=[12256], 10.00th=[12911], 20.00th=[13435], 00:31:24.276 | 30.00th=[13829], 40.00th=[14484], 50.00th=[16057], 60.00th=[17957], 00:31:24.276 | 70.00th=[20841], 80.00th=[26870], 90.00th=[32375], 95.00th=[39060], 00:31:24.276 | 99.00th=[58983], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:31:24.276 | 99.99th=[68682] 00:31:24.276 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:31:24.276 slat (usec): min=3, max=8479, avg=117.32, stdev=687.57 00:31:24.276 clat (usec): min=8017, max=54336, avg=16280.27, stdev=5673.28 00:31:24.276 lat (usec): min=8021, max=54349, avg=16397.60, stdev=5696.03 00:31:24.276 clat percentiles (usec): 00:31:24.276 | 1.00th=[ 9634], 5.00th=[11994], 10.00th=[12387], 20.00th=[12649], 00:31:24.276 | 30.00th=[13042], 40.00th=[13566], 50.00th=[14091], 60.00th=[15008], 00:31:24.276 | 70.00th=[17695], 80.00th=[19792], 90.00th=[22414], 95.00th=[24249], 00:31:24.276 | 99.00th=[42206], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:31:24.276 | 99.99th=[54264] 00:31:24.276 bw ( KiB/s): min=13288, max=15384, per=20.15%, avg=14336.00, stdev=1482.10, samples=2 00:31:24.276 iops : min= 3322, max= 3846, avg=3584.00, stdev=370.52, samples=2 00:31:24.276 lat (msec) : 4=0.34%, 10=1.45%, 20=72.92%, 50=23.94%, 100=1.34% 00:31:24.276 cpu : usr=4.19%, sys=6.39%, ctx=288, majf=0, minf=1 00:31:24.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:24.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.276 issued rwts: total=3429,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:24.276 job3: (groupid=0, jobs=1): err= 0: pid=536751: Fri Nov 15 10:50:12 2024 00:31:24.276 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:31:24.276 slat (usec): min=3, max=11088, avg=105.96, stdev=545.47 00:31:24.276 clat (usec): min=2785, max=35407, avg=14259.45, stdev=3928.98 
00:31:24.276 lat (usec): min=2791, max=35422, avg=14365.41, stdev=3943.63 00:31:24.276 clat percentiles (usec): 00:31:24.276 | 1.00th=[10028], 5.00th=[11207], 10.00th=[11731], 20.00th=[12125], 00:31:24.276 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:31:24.276 | 70.00th=[13960], 80.00th=[14746], 90.00th=[16712], 95.00th=[25035], 00:31:24.276 | 99.00th=[30540], 99.50th=[31589], 99.90th=[34866], 99.95th=[34866], 00:31:24.276 | 99.99th=[35390] 00:31:24.276 write: IOPS=4221, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1003msec); 0 zone resets 00:31:24.276 slat (usec): min=4, max=33024, avg=116.83, stdev=838.85 00:31:24.276 clat (usec): min=387, max=90237, avg=15391.15, stdev=10868.05 00:31:24.276 lat (usec): min=3044, max=90245, avg=15507.98, stdev=10932.16 00:31:24.276 clat percentiles (usec): 00:31:24.276 | 1.00th=[ 5604], 5.00th=[ 8979], 10.00th=[10945], 20.00th=[11600], 00:31:24.276 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12911], 60.00th=[13566], 00:31:24.276 | 70.00th=[13960], 80.00th=[14484], 90.00th=[17695], 95.00th=[34866], 00:31:24.276 | 99.00th=[72877], 99.50th=[79168], 99.90th=[90702], 99.95th=[90702], 00:31:24.276 | 99.99th=[90702] 00:31:24.276 bw ( KiB/s): min=14576, max=18328, per=23.12%, avg=16452.00, stdev=2653.06, samples=2 00:31:24.276 iops : min= 3644, max= 4582, avg=4113.00, stdev=663.27, samples=2 00:31:24.276 lat (usec) : 500=0.01% 00:31:24.276 lat (msec) : 4=0.56%, 10=3.55%, 20=88.27%, 50=6.10%, 100=1.50% 00:31:24.276 cpu : usr=6.39%, sys=10.68%, ctx=430, majf=0, minf=1 00:31:24.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:24.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.276 issued rwts: total=4096,4234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:24.276 00:31:24.276 Run status group 0 (all jobs): 00:31:24.276 READ: bw=66.2MiB/s (69.4MB/s), 13.4MiB/s-20.0MiB/s (14.0MB/s-20.9MB/s), io=66.4MiB (69.6MB), run=1002-1003msec 00:31:24.276 WRITE: bw=69.5MiB/s (72.9MB/s), 14.0MiB/s-21.1MiB/s (14.6MB/s-22.1MB/s), io=69.7MiB (73.1MB), run=1002-1003msec 00:31:24.276 00:31:24.276 Disk stats (read/write): 00:31:24.276 nvme0n1: ios=3634/3802, merge=0/0, ticks=24223/28557, in_queue=52780, util=84.87% 00:31:24.276 nvme0n2: ios=4433/4608, merge=0/0, ticks=13895/14733, in_queue=28628, util=89.75% 00:31:24.276 nvme0n3: ios=2613/3071, merge=0/0, ticks=22009/16516, in_queue=38525, util=96.98% 00:31:24.276 nvme0n4: ios=3415/3584, merge=0/0, ticks=16085/24214, in_queue=40299, util=98.43% 00:31:24.276 10:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:24.276 10:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=536885 00:31:24.276 10:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:24.276 10:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:24.276 [global] 00:31:24.276 thread=1 00:31:24.276 invalidate=1 00:31:24.276 rw=read 00:31:24.276 time_based=1 00:31:24.276 runtime=10 00:31:24.276 ioengine=libaio 00:31:24.276 direct=1 00:31:24.276 bs=4096 00:31:24.276 iodepth=1 00:31:24.276 norandommap=1 00:31:24.276 numjobs=1 00:31:24.276 00:31:24.276 [job0] 
00:31:24.276 filename=/dev/nvme0n1 00:31:24.276 [job1] 00:31:24.276 filename=/dev/nvme0n2 00:31:24.276 [job2] 00:31:24.276 filename=/dev/nvme0n3 00:31:24.276 [job3] 00:31:24.276 filename=/dev/nvme0n4 00:31:24.276 Could not set queue depth (nvme0n1) 00:31:24.276 Could not set queue depth (nvme0n2) 00:31:24.276 Could not set queue depth (nvme0n3) 00:31:24.276 Could not set queue depth (nvme0n4) 00:31:24.533 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.533 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.533 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.533 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.533 fio-3.35 00:31:24.533 Starting 4 threads 00:31:27.057 10:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:27.620 10:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:27.620 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42156032, buflen=4096 00:31:27.620 fio: pid=537104, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:27.877 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:27.877 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:27.877 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2990080, buflen=4096 00:31:27.877 fio: pid=537103, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:28.135 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6619136, buflen=4096 00:31:28.135 fio: pid=537101, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:28.135 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:28.135 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:28.394 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=35389440, buflen=4096 00:31:28.394 fio: pid=537102, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:28.394 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:28.394 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:28.394 00:31:28.394 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=537101: Fri Nov 15 10:50:16 2024 00:31:28.394 read: IOPS=459, BW=1837KiB/s (1881kB/s)(6464KiB/3519msec) 00:31:28.394 slat (usec): min=4, max=12779, 
avg=32.04, stdev=498.63 00:31:28.394 clat (usec): min=189, max=41990, avg=2128.01, stdev=8500.59 00:31:28.394 lat (usec): min=195, max=53784, avg=2160.06, stdev=8549.13 00:31:28.394 clat percentiles (usec): 00:31:28.394 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 227], 00:31:28.394 | 30.00th=[ 235], 40.00th=[ 249], 50.00th=[ 265], 60.00th=[ 281], 00:31:28.394 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 334], 95.00th=[ 482], 00:31:28.394 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:31:28.394 | 99.99th=[42206] 00:31:28.394 bw ( KiB/s): min= 136, max= 6856, per=7.23%, avg=1617.33, stdev=2625.56, samples=6 00:31:28.394 iops : min= 34, max= 1714, avg=404.33, stdev=656.39, samples=6 00:31:28.394 lat (usec) : 250=41.87%, 500=53.18%, 750=0.19%, 1000=0.06% 00:31:28.394 lat (msec) : 2=0.06%, 50=4.58% 00:31:28.394 cpu : usr=0.26%, sys=0.60%, ctx=1620, majf=0, minf=2 00:31:28.394 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.394 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.394 issued rwts: total=1617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.394 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:28.394 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=537102: Fri Nov 15 10:50:16 2024 00:31:28.394 read: IOPS=2269, BW=9078KiB/s (9296kB/s)(33.8MiB/3807msec) 00:31:28.394 slat (usec): min=4, max=15896, avg=16.24, stdev=262.87 00:31:28.394 clat (usec): min=180, max=41244, avg=420.70, stdev=2586.70 00:31:28.394 lat (usec): min=186, max=57040, avg=436.94, stdev=2628.88 00:31:28.394 clat percentiles (usec): 00:31:28.394 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:31:28.394 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 253], 00:31:28.394 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 330], 00:31:28.394 | 99.00th=[ 519], 99.50th=[ 635], 99.90th=[41157], 99.95th=[41157], 00:31:28.394 | 99.99th=[41157] 00:31:28.394 bw ( KiB/s): min= 120, max=16208, per=39.19%, avg=8762.14, stdev=5732.09, samples=7 00:31:28.394 iops : min= 30, max= 4052, avg=2190.43, stdev=1433.05, samples=7 00:31:28.394 lat (usec) : 250=57.00%, 500=41.75%, 750=0.82%, 1000=0.01% 00:31:28.394 lat (msec) : 50=0.41% 00:31:28.394 cpu : usr=1.05%, sys=3.02%, ctx=8648, majf=0, minf=1 00:31:28.394 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.394 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.394 issued rwts: total=8641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.394 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:28.394 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=537103: Fri Nov 15 10:50:16 2024 00:31:28.394 read: IOPS=226, BW=905KiB/s (926kB/s)(2920KiB/3228msec) 00:31:28.394 slat (nsec): min=4871, max=52027, avg=12373.45, stdev=8495.67 00:31:28.394 clat (usec): min=204, max=41297, avg=4374.33, stdev=12212.53 00:31:28.394 lat (usec): min=209, max=41311, avg=4386.70, stdev=12214.49 00:31:28.394 clat percentiles (usec): 00:31:28.394 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 241], 20.00th=[ 249], 00:31:28.394 | 30.00th=[ 255], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 322], 00:31:28.394 | 70.00th=[ 355], 
80.00th=[ 388], 90.00th=[ 3228], 95.00th=[41157], 00:31:28.394 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:28.394 | 99.99th=[41157] 00:31:28.394 bw ( KiB/s): min= 96, max= 3472, per=4.32%, avg=966.67, stdev=1422.60, samples=6 00:31:28.394 iops : min= 24, max= 868, avg=241.67, stdev=355.65, samples=6 00:31:28.394 lat (usec) : 250=20.38%, 500=67.58%, 750=1.78% 00:31:28.394 lat (msec) : 4=0.14%, 50=9.99% 00:31:28.394 cpu : usr=0.12%, sys=0.28%, ctx=731, majf=0, minf=1 00:31:28.394 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.394 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.394 issued rwts: total=731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.395 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:28.395 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=537104: Fri Nov 15 10:50:16 2024 00:31:28.395 read: IOPS=3515, BW=13.7MiB/s (14.4MB/s)(40.2MiB/2928msec) 00:31:28.395 slat (nsec): min=4614, max=54009, avg=8980.60, stdev=6082.52 00:31:28.395 clat (usec): min=199, max=41345, avg=271.08, stdev=571.29 00:31:28.395 lat (usec): min=205, max=41351, avg=280.06, stdev=571.78 00:31:28.395 clat percentiles (usec): 00:31:28.395 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 221], 00:31:28.395 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 251], 00:31:28.395 | 70.00th=[ 265], 80.00th=[ 293], 90.00th=[ 322], 95.00th=[ 392], 00:31:28.395 | 99.00th=[ 562], 99.50th=[ 603], 99.90th=[ 635], 99.95th=[ 668], 00:31:28.395 | 99.99th=[40633] 00:31:28.395 bw ( KiB/s): min=10272, max=16248, per=61.82%, avg=13822.40, stdev=2321.72, samples=5 00:31:28.395 iops : min= 2568, max= 4062, avg=3455.60, stdev=580.43, samples=5 00:31:28.395 lat (usec) : 250=58.14%, 500=39.96%, 750=1.88% 00:31:28.395 lat (msec) : 50=0.02% 00:31:28.395 cpu : usr=1.61%, sys=4.51%, ctx=10293, majf=0, minf=2 00:31:28.395 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.395 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.395 issued rwts: total=10293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.395 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:28.395 00:31:28.395 Run status group 0 (all jobs): 00:31:28.395 READ: bw=21.8MiB/s (22.9MB/s), 905KiB/s-13.7MiB/s (926kB/s-14.4MB/s), io=83.1MiB (87.2MB), run=2928-3807msec 00:31:28.395 00:31:28.395 Disk stats (read/write): 00:31:28.395 nvme0n1: ios=1611/0, merge=0/0, ticks=3265/0, in_queue=3265, util=95.17% 00:31:28.395 nvme0n2: ios=7900/0, merge=0/0, ticks=3611/0, in_queue=3611, util=98.55% 00:31:28.395 nvme0n3: ios=727/0, merge=0/0, ticks=3068/0, in_queue=3068, util=96.82% 00:31:28.395 nvme0n4: ios=10086/0, merge=0/0, ticks=2726/0, in_queue=2726, util=96.75% 00:31:28.653 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:28.653 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:28.911 10:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:31:28.911 10:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:29.170 10:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:29.170 10:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:29.427 10:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:29.427 10:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:29.684 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:29.684 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 536885 00:31:29.684 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:29.684 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:29.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:29.944 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:29.944 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:31:29.944 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:31:29.944 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:29.944 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:31:29.944 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:29.944 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:31:29.944 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:29.944 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:29.944 nvmf hotplug test: fio failed as expected 00:31:29.944 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:30.202 10:50:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:30.202 rmmod nvme_tcp 00:31:30.202 rmmod nvme_fabrics 00:31:30.202 rmmod nvme_keyring 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 535002 ']' 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 535002 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 535002 ']' 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 535002 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 535002 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 535002' 00:31:30.202 killing process with pid 535002 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 535002 00:31:30.202 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 535002 00:31:30.465 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:30.465 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:30.465 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:30.465 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 
00:31:30.465 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:30.465 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:30.465 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:30.465 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:30.465 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:30.465 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.465 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.465 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.069 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:33.070 00:31:33.070 real 0m23.607s 00:31:33.070 user 1m7.946s 00:31:33.070 sys 0m10.056s 00:31:33.070 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:33.070 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.070 ************************************ 00:31:33.070 END TEST nvmf_fio_target 00:31:33.070 ************************************ 00:31:33.070 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:33.070 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:33.070 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:33.070 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:33.070 ************************************ 00:31:33.070 START TEST nvmf_bdevio 00:31:33.070 ************************************ 00:31:33.070 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:33.070 * Looking for test storage... 
00:31:33.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:33.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.070 --rc genhtml_branch_coverage=1 00:31:33.070 --rc genhtml_function_coverage=1 00:31:33.070 --rc genhtml_legend=1 00:31:33.070 --rc geninfo_all_blocks=1 00:31:33.070 --rc geninfo_unexecuted_blocks=1 00:31:33.070 00:31:33.070 ' 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:33.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.070 --rc genhtml_branch_coverage=1 00:31:33.070 --rc genhtml_function_coverage=1 00:31:33.070 --rc genhtml_legend=1 00:31:33.070 --rc geninfo_all_blocks=1 00:31:33.070 --rc geninfo_unexecuted_blocks=1 00:31:33.070 00:31:33.070 ' 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:33.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.070 --rc genhtml_branch_coverage=1 00:31:33.070 --rc genhtml_function_coverage=1 00:31:33.070 --rc genhtml_legend=1 00:31:33.070 --rc geninfo_all_blocks=1 00:31:33.070 --rc geninfo_unexecuted_blocks=1 00:31:33.070 00:31:33.070 ' 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:33.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.070 --rc genhtml_branch_coverage=1 00:31:33.070 --rc genhtml_function_coverage=1 00:31:33.070 --rc genhtml_legend=1 00:31:33.070 --rc geninfo_all_blocks=1 00:31:33.070 --rc geninfo_unexecuted_blocks=1 00:31:33.070 00:31:33.070 ' 00:31:33.070 10:50:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.070 10:50:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:33.070 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.071 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:31:34.977 Found 0000:82:00.0 (0x8086 - 0x159b) 00:31:34.977 10:50:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:31:34.977 Found 0000:82:00.1 (0x8086 - 0x159b) 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:34.977 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:31:34.978 Found net devices under 0000:82:00.0: cvl_0_0 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:31:34.978 Found net devices under 0000:82:00.1: cvl_0_1 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:34.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:31:34.978 00:31:34.978 --- 10.0.0.2 ping statistics --- 00:31:34.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.978 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:34.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:34.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:31:34.978 00:31:34.978 --- 10.0.0.1 ping statistics --- 00:31:34.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.978 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:34.978 10:50:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=539737 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:34.978 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 539737 00:31:34.979 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 539737 ']' 00:31:34.979 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.979 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:34.979 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:34.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:34.979 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:34.979 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:34.979 [2024-11-15 10:50:23.374855] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:34.979 [2024-11-15 10:50:23.375946] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:31:34.979 [2024-11-15 10:50:23.376002] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.238 [2024-11-15 10:50:23.448036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:35.238 [2024-11-15 10:50:23.506058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.238 [2024-11-15 10:50:23.506114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.238 [2024-11-15 10:50:23.506136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.238 [2024-11-15 10:50:23.506146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.238 [2024-11-15 10:50:23.506156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.238 [2024-11-15 10:50:23.507854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:35.238 [2024-11-15 10:50:23.507916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:35.238 [2024-11-15 10:50:23.507984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:35.238 [2024-11-15 10:50:23.507987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:35.238 [2024-11-15 10:50:23.594670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
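What the trace above amounts to, condensed: nvmf/common.sh splits the two E810 ports so that cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace as the target side while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator side, opens TCP/4420 through iptables, checks the path with pings, and then launches nvmf_tgt inside the namespace in interrupt mode. A rough by-hand equivalent, using the interface names, addresses and 0x78 core mask from this particular run (paths shortened to the SPDK checkout root):

  # clear any stale addressing, then move the target-side port into its own namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # allow NVMe/TCP in from the initiator-side port and sanity-check both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # start the target inside the namespace, in interrupt mode, on cores 3-6 (mask 0x78)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78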
00:31:35.238 [2024-11-15 10:50:23.594908] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:35.238 [2024-11-15 10:50:23.595166] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:35.238 [2024-11-15 10:50:23.595769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:35.238 [2024-11-15 10:50:23.595992] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:35.238 [2024-11-15 10:50:23.644651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:35.238 Malloc0 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.238 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.497 10:50:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:35.497 [2024-11-15 10:50:23.716955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:35.497 { 00:31:35.497 "params": { 00:31:35.497 "name": "Nvme$subsystem", 00:31:35.497 "trtype": "$TEST_TRANSPORT", 00:31:35.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.497 "adrfam": "ipv4", 00:31:35.497 "trsvcid": "$NVMF_PORT", 00:31:35.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.497 "hdgst": ${hdgst:-false}, 00:31:35.497 "ddgst": ${ddgst:-false} 00:31:35.497 }, 00:31:35.497 "method": "bdev_nvme_attach_controller" 00:31:35.497 } 00:31:35.497 EOF 00:31:35.497 )") 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:35.497 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:35.497 "params": { 00:31:35.497 "name": "Nvme1", 00:31:35.497 "trtype": "tcp", 00:31:35.497 "traddr": "10.0.0.2", 00:31:35.497 "adrfam": "ipv4", 00:31:35.497 "trsvcid": "4420", 00:31:35.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:35.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:35.497 "hdgst": false, 00:31:35.497 "ddgst": false 00:31:35.497 }, 00:31:35.497 "method": "bdev_nvme_attach_controller" 00:31:35.497 }' 00:31:35.497 [2024-11-15 10:50:23.764514] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:31:35.497 [2024-11-15 10:50:23.764589] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid539761 ] 00:31:35.497 [2024-11-15 10:50:23.833910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:35.497 [2024-11-15 10:50:23.896627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.497 [2024-11-15 10:50:23.896653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.497 [2024-11-15 10:50:23.896657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.062 I/O targets: 00:31:36.062 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:36.062 00:31:36.062 00:31:36.062 CUnit - A unit testing framework for C - Version 2.1-3 00:31:36.062 http://cunit.sourceforge.net/ 00:31:36.062 00:31:36.062 00:31:36.062 Suite: bdevio tests on: Nvme1n1 00:31:36.062 Test: blockdev write read block ...passed 00:31:36.062 Test: blockdev write zeroes read block ...passed 00:31:36.062 Test: blockdev write zeroes read no split ...passed 00:31:36.062 Test: blockdev write zeroes read split ...passed 00:31:36.062 Test: blockdev write zeroes read split partial ...passed 00:31:36.062 Test: blockdev reset ...[2024-11-15 10:50:24.380865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:36.062 [2024-11-15 10:50:24.380987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d95640 (9): Bad file descriptor 00:31:36.062 [2024-11-15 10:50:24.386206] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
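The bdevio pass that follows is driven by a handful of RPCs against that target (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py on the default /var/tmp/spdk.sock) plus the bdevio binary fed the bdev_nvme_attach_controller JSON printed above. Roughly, from the SPDK checkout root (bdevio.json here is a stand-in for the generated config, not a file the harness writes):

  # provision the target: TCP transport, a 64 MiB / 512-byte-block malloc bdev, one subsystem with a listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # exercise it with bdevio; bdevio.json stands for the gen_nvmf_target_json output shown above,
  # which the harness actually passes on /dev/fd/62
  test/bdev/bdevio/bdevio --json bdevio.json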
00:31:36.062 passed 00:31:36.062 Test: blockdev write read 8 blocks ...passed 00:31:36.062 Test: blockdev write read size > 128k ...passed 00:31:36.062 Test: blockdev write read invalid size ...passed 00:31:36.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:36.062 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:36.062 Test: blockdev write read max offset ...passed 00:31:36.319 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:36.319 Test: blockdev writev readv 8 blocks ...passed 00:31:36.319 Test: blockdev writev readv 30 x 1block ...passed 00:31:36.319 Test: blockdev writev readv block ...passed 00:31:36.319 Test: blockdev writev readv size > 128k ...passed 00:31:36.319 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:36.319 Test: blockdev comparev and writev ...[2024-11-15 10:50:24.642671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:36.319 [2024-11-15 10:50:24.642709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:36.319 [2024-11-15 10:50:24.642734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:36.319 [2024-11-15 10:50:24.642751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:36.319 [2024-11-15 10:50:24.643235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:36.319 [2024-11-15 10:50:24.643271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:36.319 [2024-11-15 10:50:24.643308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:36.319 [2024-11-15 10:50:24.643336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:36.319 [2024-11-15 10:50:24.643832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:36.319 [2024-11-15 10:50:24.643860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:36.319 [2024-11-15 10:50:24.643882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:36.319 [2024-11-15 10:50:24.643900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:36.319 [2024-11-15 10:50:24.644348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:36.319 [2024-11-15 10:50:24.644388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:36.319 [2024-11-15 10:50:24.644416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:36.319 [2024-11-15 10:50:24.644432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:36.319 passed 00:31:36.319 Test: blockdev nvme passthru rw ...passed 00:31:36.319 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:50:24.726824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:36.319 [2024-11-15 10:50:24.726852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:36.319 [2024-11-15 10:50:24.727111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:36.319 [2024-11-15 10:50:24.727133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:36.319 [2024-11-15 10:50:24.727287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:36.319 [2024-11-15 10:50:24.727311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:36.319 [2024-11-15 10:50:24.727480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:36.319 [2024-11-15 10:50:24.727504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:36.319 passed 00:31:36.319 Test: blockdev nvme admin passthru ...passed 00:31:36.319 Test: blockdev copy ...passed 00:31:36.319 00:31:36.319 Run Summary: Type Total Ran Passed Failed Inactive 00:31:36.319 suites 1 1 n/a 0 0 00:31:36.320 tests 23 23 23 0 0 00:31:36.320 asserts 152 152 152 0 n/a 00:31:36.320 00:31:36.320 Elapsed time = 1.028 seconds 00:31:36.577 10:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:36.577 10:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.577 10:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:36.577 10:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.577 10:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:36.577 10:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:36.577 10:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:36.577 10:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:36.577 10:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:36.578 10:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:36.578 10:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:36.578 10:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:36.578 rmmod nvme_tcp 00:31:36.578 rmmod nvme_fabrics 00:31:36.578 rmmod nvme_keyring 00:31:36.578 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
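The cleanup that nvmftestfini performs from here on is, in outline: drop the subsystem, unload the kernel initiator modules, stop the target process, strip only the SPDK-tagged firewall rules, and dismantle the namespace plumbing. A rough sketch using the PID and names from this run:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -r nvme-tcp nvme-fabrics        # the rmmod output above comes from this step
  kill 539737                              # nvmfpid from this run; the harness also waits for it to exit
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk          # what remove_spdk_ns amounts to here; cvl_0_0 returns to the root namespace
  ip -4 addr flush cvl_0_1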
00:31:36.578 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:36.578 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:36.578 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 539737 ']' 00:31:36.578 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 539737 00:31:36.578 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 539737 ']' 00:31:36.578 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 539737 00:31:36.578 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:31:36.578 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:36.578 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 539737 00:31:36.835 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:31:36.835 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:31:36.835 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 539737' 00:31:36.835 killing process with pid 539737 00:31:36.835 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 539737 00:31:36.835 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 539737 00:31:37.094 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:37.094 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:37.094 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:37.094 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:31:37.094 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:37.094 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:37.094 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:37.094 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.094 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.094 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.094 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.094 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.994 10:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:38.994 00:31:38.994 real 0m6.385s 00:31:38.994 user 0m9.046s 
00:31:38.994 sys 0m2.469s
00:31:38.994 10:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable
00:31:38.994 10:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:31:38.994 ************************************
00:31:38.994 END TEST nvmf_bdevio
00:31:38.994 ************************************
00:31:38.994 10:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:31:38.994
00:31:38.994 real 3m55.663s
00:31:38.994 user 8m55.508s
00:31:38.994 sys 1m27.598s
00:31:38.994 10:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable
00:31:38.994 10:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:38.994 ************************************
00:31:38.994 END TEST nvmf_target_core_interrupt_mode
00:31:38.994 ************************************
00:31:38.994 10:50:27 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:31:38.994 10:50:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:31:38.994 10:50:27 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable
00:31:38.994 10:50:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:31:38.994 ************************************
00:31:38.994 START TEST nvmf_interrupt
00:31:38.994 ************************************
00:31:38.994 10:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:31:39.253 * Looking for test storage...
00:31:39.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:39.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.253 --rc genhtml_branch_coverage=1 00:31:39.253 --rc genhtml_function_coverage=1 00:31:39.253 --rc genhtml_legend=1 00:31:39.253 --rc geninfo_all_blocks=1 00:31:39.253 --rc geninfo_unexecuted_blocks=1 00:31:39.253 00:31:39.253 ' 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:39.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.253 --rc genhtml_branch_coverage=1 00:31:39.253 --rc genhtml_function_coverage=1 00:31:39.253 --rc genhtml_legend=1 00:31:39.253 --rc geninfo_all_blocks=1 00:31:39.253 --rc geninfo_unexecuted_blocks=1 00:31:39.253 00:31:39.253 ' 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:39.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.253 --rc genhtml_branch_coverage=1 00:31:39.253 --rc genhtml_function_coverage=1 00:31:39.253 --rc genhtml_legend=1 00:31:39.253 --rc geninfo_all_blocks=1 00:31:39.253 --rc geninfo_unexecuted_blocks=1 00:31:39.253 00:31:39.253 ' 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:39.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.253 --rc genhtml_branch_coverage=1 00:31:39.253 --rc genhtml_function_coverage=1 00:31:39.253 --rc genhtml_legend=1 00:31:39.253 --rc geninfo_all_blocks=1 00:31:39.253 --rc geninfo_unexecuted_blocks=1 00:31:39.253 00:31:39.253 ' 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:39.253 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:39.254 10:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:31:41.786 Found 0000:82:00.0 (0x8086 - 0x159b) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.786 10:50:29 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:31:41.786 Found 0000:82:00.1 (0x8086 - 0x159b) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:31:41.786 Found net devices under 0000:82:00.0: cvl_0_0 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:31:41.786 Found net devices under 0000:82:00.1: cvl_0_1 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:41.786 10:50:29 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:41.786 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:41.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:41.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:31:41.787 00:31:41.787 --- 10.0.0.2 ping statistics --- 00:31:41.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.787 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:41.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:41.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:31:41.787 00:31:41.787 --- 10.0.0.1 ping statistics --- 00:31:41.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.787 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=541953 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 541953 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 541953 ']' 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:41.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:41.787 10:50:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:41.787 [2024-11-15 10:50:29.886321] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:41.787 [2024-11-15 10:50:29.887384] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:31:41.787 [2024-11-15 10:50:29.887469] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:41.787 [2024-11-15 10:50:29.961159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:41.787 [2024-11-15 10:50:30.020796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:41.787 [2024-11-15 10:50:30.020844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:41.787 [2024-11-15 10:50:30.020868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:41.787 [2024-11-15 10:50:30.020878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:41.787 [2024-11-15 10:50:30.020887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:41.787 [2024-11-15 10:50:30.022246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.787 [2024-11-15 10:50:30.022251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.787 [2024-11-15 10:50:30.109009] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:41.787 [2024-11-15 10:50:30.109060] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:41.787 [2024-11-15 10:50:30.109272] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:41.787 5000+0 records in 00:31:41.787 5000+0 records out 00:31:41.787 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0136975 s, 748 MB/s 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:41.787 AIO0 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:41.787 [2024-11-15 10:50:30.214927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.787 10:50:30 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:41.787 [2024-11-15 10:50:30.243162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 541953 0 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 541953 0 idle 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=541953 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:41.787 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 541953 -w 256 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 541953 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.27 reactor_0' 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 541953 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.27 reactor_0 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 541953 1 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 541953 1 idle 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=541953 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 541953 -w 256 00:31:42.045 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 541979 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 541979 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=542018 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
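Restated outside the rpc_cmd wrapper, the target configuration traced above is a short series of rpc.py calls against the default /var/tmp/spdk.sock, and the reactor_is_busy_or_idle helper samples per-thread CPU usage with a single batch-mode top iteration. A minimal sketch of both steps follows; the rpc.py invocation and the aiofile location are illustrative, while the NQN, serial, listener address, and top field layout come straight from the trace.

  # Back the subsystem with a file-based AIO bdev and export it over NVMe/TCP.
  dd if=/dev/zero of=./aiofile bs=2048 count=5000
  scripts/rpc.py bdev_aio_create ./aiofile AIO0 2048
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Sample one reactor's load the way the busy/idle checks above do:
  # one batch top pass, per-thread (-H), filtered to the reactor, %CPU in column 9.
  pid=541953; idx=0
  cpu_rate=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | awk '{print $9}')
  echo "reactor_${idx}: ${cpu_rate}% CPU"   # compared against the test's idle/busy thresholds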
00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 541953 0 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 541953 0 busy 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=541953 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:42.302 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:42.303 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:42.303 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:42.303 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 541953 -w 256 00:31:42.303 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:42.303 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 541953 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:00.48 reactor_0' 00:31:42.303 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 541953 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:00.48 reactor_0 00:31:42.303 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:42.303 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 541953 1 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 541953 1 busy 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=541953 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 541953 -w 256 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 541979 root 20 0 128.2g 48768 35328 R 93.3 0.1 0:00.26 reactor_1' 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 541979 root 20 0 128.2g 48768 35328 R 93.3 0.1 0:00.26 reactor_1 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:42.560 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:42.561 10:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:42.561 10:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 542018 00:31:52.526 Initializing NVMe Controllers 00:31:52.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:52.526 Controller IO queue size 256, less than required. 00:31:52.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:52.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:52.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:52.526 Initialization complete. Launching workers. 
00:31:52.526 ======================================================== 00:31:52.526 Latency(us) 00:31:52.526 Device Information : IOPS MiB/s Average min max 00:31:52.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 14362.20 56.10 17835.59 4475.92 22084.67 00:31:52.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 14102.20 55.09 18164.57 4597.15 22719.56 00:31:52.526 ======================================================== 00:31:52.526 Total : 28464.40 111.19 17998.58 4475.92 22719.56 00:31:52.526 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 541953 0 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 541953 0 idle 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=541953 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 541953 -w 256 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 541953 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.22 reactor_0' 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 541953 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.22 reactor_0 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 541953 1 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 541953 1 idle 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=541953 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 541953 -w 256 00:31:52.526 10:50:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:52.784 10:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 541979 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.97 reactor_1' 00:31:52.785 10:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 541979 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.97 reactor_1 00:31:52.785 10:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:52.785 10:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:52.785 10:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:52.785 10:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:52.785 10:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:52.785 10:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:52.785 10:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:52.785 10:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:52.785 10:50:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:53.043 10:50:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:31:53.043 10:50:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:31:53.043 10:50:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:31:53.043 10:50:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:31:53.043 10:50:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 541953 0 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 541953 0 idle 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=541953 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 541953 -w 256 00:31:54.942 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 541953 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.31 reactor_0' 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 541953 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.31 reactor_0 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 541953 1 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 541953 1 idle 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=541953 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:55.199 10:50:43 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 541953 -w 256 00:31:55.199 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 541979 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.01 reactor_1' 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 541979 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.01 reactor_1 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:55.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:55.458 rmmod nvme_tcp 00:31:55.458 rmmod nvme_fabrics 00:31:55.458 rmmod nvme_keyring 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 541953 ']' 00:31:55.458 
10:50:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 541953 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 541953 ']' 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 541953 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:55.458 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 541953 00:31:55.716 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:55.716 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:55.716 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 541953' 00:31:55.716 killing process with pid 541953 00:31:55.716 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 541953 00:31:55.717 10:50:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 541953 00:31:55.717 10:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:55.717 10:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:55.717 10:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:55.717 10:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:31:55.717 10:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:31:55.975 10:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:55.975 10:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:31:55.975 10:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:55.975 10:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:55.975 10:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.975 10:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:55.975 10:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.884 10:50:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:57.884 00:31:57.884 real 0m18.792s 00:31:57.884 user 0m36.996s 00:31:57.884 sys 0m6.864s 00:31:57.884 10:50:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:57.884 10:50:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:57.884 ************************************ 00:31:57.884 END TEST nvmf_interrupt 00:31:57.884 ************************************ 00:31:57.884 00:31:57.884 real 25m17.964s 00:31:57.884 user 59m2.895s 00:31:57.884 sys 6m54.566s 00:31:57.884 10:50:46 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:57.884 10:50:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:57.884 ************************************ 00:31:57.884 END TEST nvmf_tcp 00:31:57.884 ************************************ 00:31:57.884 10:50:46 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:31:57.884 10:50:46 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:57.884 10:50:46 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 
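On the initiator side, the trace above closes out the interrupt test by driving the kernel NVMe/TCP host stack against the subsystem and then tearing it down. Stripped of the test wrappers, that round trip is roughly the following; the hostnqn/hostid values are the ones generated for this run, and the serial-number poll is a simplified stand-in for the waitforserial helper.

  # Connect the kernel initiator to the SPDK subsystem on 10.0.0.2:4420.
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd \
               --hostid=8b464f06-2980-e311-ba20-001e67a94acd \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # Wait until a block device carrying the subsystem serial shows up.
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
  # Drop the session again before the target is shut down.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1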
00:31:57.884 10:50:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:57.884 10:50:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.884 ************************************ 00:31:57.884 START TEST spdkcli_nvmf_tcp 00:31:57.884 ************************************ 00:31:57.884 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:58.144 * Looking for test storage... 00:31:58.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:58.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.144 --rc genhtml_branch_coverage=1 00:31:58.144 --rc genhtml_function_coverage=1 00:31:58.144 --rc genhtml_legend=1 00:31:58.144 --rc geninfo_all_blocks=1 00:31:58.144 --rc geninfo_unexecuted_blocks=1 00:31:58.144 00:31:58.144 ' 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:58.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.144 --rc genhtml_branch_coverage=1 00:31:58.144 --rc genhtml_function_coverage=1 00:31:58.144 --rc genhtml_legend=1 00:31:58.144 --rc geninfo_all_blocks=1 00:31:58.144 --rc geninfo_unexecuted_blocks=1 00:31:58.144 00:31:58.144 ' 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:58.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.144 --rc genhtml_branch_coverage=1 00:31:58.144 --rc genhtml_function_coverage=1 00:31:58.144 --rc genhtml_legend=1 00:31:58.144 --rc geninfo_all_blocks=1 00:31:58.144 --rc geninfo_unexecuted_blocks=1 00:31:58.144 00:31:58.144 ' 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:58.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.144 --rc genhtml_branch_coverage=1 00:31:58.144 --rc genhtml_function_coverage=1 00:31:58.144 --rc genhtml_legend=1 00:31:58.144 --rc geninfo_all_blocks=1 00:31:58.144 --rc geninfo_unexecuted_blocks=1 00:31:58.144 00:31:58.144 ' 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:58.144 
10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.144 10:50:46 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:58.145 10:50:46 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:58.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=544017 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 544017 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 544017 ']' 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:58.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:58.145 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:58.145 [2024-11-15 10:50:46.533066] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
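For the spdkcli run, nvmf_tgt is restarted in the default namespace with -m 0x3 -p 0 and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. One way to reproduce that wait is sketched below; polling rpc_get_methods is an assumption made for illustration, not necessarily what the helper does internally.

  # Start the target and poll its RPC socket until it responds (up to ~100 tries).
  build/bin/nvmf_tgt -m 0x3 -p 0 &
  tgt_pid=$!
  for _ in $(seq 1 100); do
      scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done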
00:31:58.145 [2024-11-15 10:50:46.533149] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544017 ] 00:31:58.145 [2024-11-15 10:50:46.599864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:58.403 [2024-11-15 10:50:46.660911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:58.403 [2024-11-15 10:50:46.660914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.403 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:58.403 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:31:58.403 10:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:58.403 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:58.403 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:58.403 10:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:58.403 10:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:58.403 10:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:58.403 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:58.403 10:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:58.403 10:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:58.403 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:58.403 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:58.403 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:58.403 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:58.403 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:58.403 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:58.403 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:58.403 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:58.403 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:58.403 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:58.403 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:58.403 ' 00:32:01.704 [2024-11-15 10:50:49.501623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.634 [2024-11-15 10:50:50.781980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:05.157 [2024-11-15 10:50:53.125159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:07.052 [2024-11-15 10:50:55.135344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:08.425 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:08.425 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:08.425 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:08.425 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:08.425 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:08.425 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:08.425 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:08.425 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:08.425 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:08.425 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:08.425 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:08.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:08.425 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:08.425 10:50:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:08.425 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:08.425 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:08.425 10:50:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:08.425 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:08.425 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:08.425 10:50:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:08.425 10:50:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:08.990 10:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:08.990 10:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:08.990 10:50:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:08.990 10:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:08.990 10:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:08.990 
10:50:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:08.990 10:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:08.990 10:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:08.990 10:50:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:08.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:08.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:08.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:08.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:08.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:08.990 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:08.990 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:08.990 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:08.990 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:08.990 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:08.990 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:08.990 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:08.990 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:08.990 ' 00:32:14.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:14.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:14.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:14.250 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:14.250 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:14.250 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:14.250 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:14.250 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:14.250 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:14.250 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:14.250 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:14.250 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:14.250 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:14.250 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:14.508 10:51:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:14.508 10:51:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:14.508 10:51:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:14.508 
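The clear-config pass is the mirror image of the create pass: namespaces, hosts, and listen addresses are deleted first, then the subsystems, and the malloc bdevs last. A minimal sketch of the same teardown issued directly through spdkcli; the command strings come from the job above, while running them one at a time with scripts/spdkcli.py against the still-running target is an assumption:

  # unwind in roughly the reverse order of creation
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all
  ./scripts/spdkcli.py /nvmf/subsystem delete_all
  ./scripts/spdkcli.py /bdevs/malloc delete Malloc1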
10:51:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 544017 00:32:14.508 10:51:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 544017 ']' 00:32:14.508 10:51:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 544017 00:32:14.508 10:51:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:32:14.508 10:51:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:14.508 10:51:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 544017 00:32:14.508 10:51:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:14.508 10:51:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:14.508 10:51:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 544017' 00:32:14.508 killing process with pid 544017 00:32:14.508 10:51:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 544017 00:32:14.508 10:51:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 544017 00:32:14.766 10:51:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:14.766 10:51:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:14.766 10:51:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 544017 ']' 00:32:14.766 10:51:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 544017 00:32:14.766 10:51:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 544017 ']' 00:32:14.766 10:51:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 544017 00:32:14.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (544017) - No such process 00:32:14.766 10:51:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 544017 is not found' 00:32:14.766 Process with pid 544017 is not found 00:32:14.766 10:51:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:14.766 10:51:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:14.766 10:51:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:14.766 00:32:14.766 real 0m16.727s 00:32:14.766 user 0m35.646s 00:32:14.766 sys 0m0.834s 00:32:14.766 10:51:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:14.766 10:51:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:14.766 ************************************ 00:32:14.766 END TEST spdkcli_nvmf_tcp 00:32:14.766 ************************************ 00:32:14.766 10:51:03 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:14.766 10:51:03 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:14.766 10:51:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:14.766 10:51:03 -- common/autotest_common.sh@10 -- # set +x 00:32:14.766 ************************************ 00:32:14.766 START TEST nvmf_identify_passthru 00:32:14.766 ************************************ 00:32:14.766 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:14.766 * Looking for test storage... 
00:32:14.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:14.766 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:14.766 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:32:14.766 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:14.766 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.766 10:51:03 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:14.766 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.766 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:14.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.766 --rc genhtml_branch_coverage=1 00:32:14.766 --rc genhtml_function_coverage=1 00:32:14.766 --rc genhtml_legend=1 00:32:14.766 --rc geninfo_all_blocks=1 00:32:14.766 --rc geninfo_unexecuted_blocks=1 00:32:14.766 00:32:14.766 ' 00:32:14.766 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:14.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.766 --rc genhtml_branch_coverage=1 00:32:14.766 --rc genhtml_function_coverage=1 00:32:14.766 --rc genhtml_legend=1 00:32:14.766 --rc geninfo_all_blocks=1 00:32:14.766 --rc geninfo_unexecuted_blocks=1 00:32:14.766 00:32:14.766 ' 00:32:14.766 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:14.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.766 --rc genhtml_branch_coverage=1 00:32:14.766 --rc genhtml_function_coverage=1 00:32:14.766 --rc genhtml_legend=1 00:32:14.766 --rc geninfo_all_blocks=1 00:32:14.766 --rc geninfo_unexecuted_blocks=1 00:32:14.766 00:32:14.766 ' 00:32:14.766 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:14.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.766 --rc genhtml_branch_coverage=1 00:32:14.766 --rc genhtml_function_coverage=1 00:32:14.766 --rc genhtml_legend=1 00:32:14.766 --rc geninfo_all_blocks=1 00:32:14.766 --rc geninfo_unexecuted_blocks=1 00:32:14.766 00:32:14.766 ' 00:32:14.766 10:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.766 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:14.766 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.766 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.766 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.766 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.025 10:51:03 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:15.025 10:51:03 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.025 10:51:03 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.025 10:51:03 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.025 10:51:03 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.025 10:51:03 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.025 10:51:03 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.025 10:51:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:15.025 10:51:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:15.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:15.025 10:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.025 10:51:03 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:15.025 10:51:03 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.025 10:51:03 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.025 10:51:03 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.025 10:51:03 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.025 10:51:03 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.025 10:51:03 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.025 10:51:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:15.025 10:51:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.025 10:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.025 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:15.025 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:15.025 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:15.025 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:17.552 10:51:05 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:17.552 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:32:17.553 Found 0000:82:00.0 (0x8086 - 0x159b) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:32:17.553 Found 0000:82:00.1 (0x8086 - 0x159b) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:32:17.553 Found net devices under 0000:82:00.0: cvl_0_0 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:32:17.553 Found net devices under 0000:82:00.1: cvl_0_1 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:17.553 10:51:05 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:17.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:17.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:32:17.553 00:32:17.553 --- 10.0.0.2 ping statistics --- 00:32:17.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.553 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:17.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:17.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:32:17.553 00:32:17.553 --- 10.0.0.1 ping statistics --- 00:32:17.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.553 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:17.553 10:51:05 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:17.553 10:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:17.553 10:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:81:00.0 00:32:17.553 10:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:81:00.0 00:32:17.553 10:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:81:00.0 00:32:17.553 10:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:81:00.0 ']' 00:32:17.553 10:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:81:00.0' -i 0 00:32:17.553 10:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:17.553 10:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:22.815 10:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ951302VM2P0BGN 00:32:22.815 10:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:81:00.0' -i 0 00:32:22.815 10:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:22.815 10:51:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:28.072 10:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:28.072 10:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:28.072 10:51:15 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:28.072 10:51:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:28.072 10:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:28.073 10:51:15 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:28.073 10:51:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:28.073 10:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=549536 00:32:28.073 10:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:28.073 10:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:28.073 10:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 549536 00:32:28.073 10:51:15 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 549536 ']' 00:32:28.073 10:51:15 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.073 10:51:15 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:28.073 10:51:15 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.073 10:51:15 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:28.073 10:51:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:28.073 [2024-11-15 10:51:15.852070] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:32:28.073 [2024-11-15 10:51:15.852159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.073 [2024-11-15 10:51:15.925432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:28.073 [2024-11-15 10:51:15.986405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.073 [2024-11-15 10:51:15.986482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:28.073 [2024-11-15 10:51:15.986496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.073 [2024-11-15 10:51:15.986507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.073 [2024-11-15 10:51:15.986517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.073 [2024-11-15 10:51:15.988136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.073 [2024-11-15 10:51:15.988202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:28.073 [2024-11-15 10:51:15.988268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:28.073 [2024-11-15 10:51:15.988272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:32:28.073 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:28.073 INFO: Log level set to 20 00:32:28.073 INFO: Requests: 00:32:28.073 { 00:32:28.073 "jsonrpc": "2.0", 00:32:28.073 "method": "nvmf_set_config", 00:32:28.073 "id": 1, 00:32:28.073 "params": { 00:32:28.073 "admin_cmd_passthru": { 00:32:28.073 "identify_ctrlr": true 00:32:28.073 } 00:32:28.073 } 00:32:28.073 } 00:32:28.073 00:32:28.073 INFO: response: 00:32:28.073 { 00:32:28.073 "jsonrpc": "2.0", 00:32:28.073 "id": 1, 00:32:28.073 "result": true 00:32:28.073 } 00:32:28.073 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.073 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:28.073 INFO: Setting log level to 20 00:32:28.073 INFO: Setting log level to 20 00:32:28.073 INFO: Log level set to 20 00:32:28.073 INFO: Log level set to 20 00:32:28.073 INFO: Requests: 00:32:28.073 { 00:32:28.073 "jsonrpc": "2.0", 00:32:28.073 "method": "framework_start_init", 00:32:28.073 "id": 1 00:32:28.073 } 00:32:28.073 00:32:28.073 INFO: Requests: 00:32:28.073 { 00:32:28.073 "jsonrpc": "2.0", 00:32:28.073 "method": "framework_start_init", 00:32:28.073 "id": 1 00:32:28.073 } 00:32:28.073 00:32:28.073 [2024-11-15 10:51:16.203423] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:28.073 INFO: response: 00:32:28.073 { 00:32:28.073 "jsonrpc": "2.0", 00:32:28.073 "id": 1, 00:32:28.073 "result": true 00:32:28.073 } 00:32:28.073 00:32:28.073 INFO: response: 00:32:28.073 { 00:32:28.073 "jsonrpc": "2.0", 00:32:28.073 "id": 1, 00:32:28.073 "result": true 00:32:28.073 } 00:32:28.073 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.073 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.073 10:51:16 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:32:28.073 INFO: Setting log level to 40 00:32:28.073 INFO: Setting log level to 40 00:32:28.073 INFO: Setting log level to 40 00:32:28.073 [2024-11-15 10:51:16.213542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.073 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:28.073 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:81:00.0 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.073 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:31.348 Nvme0n1 00:32:31.348 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.348 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:31.348 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.348 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:31.348 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.348 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:31.348 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.348 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:31.348 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.348 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.348 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.348 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:31.348 [2024-11-15 10:51:19.118117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.348 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.348 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:31.348 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.348 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:31.348 [ 00:32:31.348 { 00:32:31.348 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:31.348 "subtype": "Discovery", 00:32:31.348 "listen_addresses": [], 00:32:31.348 "allow_any_host": true, 00:32:31.348 "hosts": [] 00:32:31.348 }, 00:32:31.348 { 00:32:31.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:31.348 "subtype": "NVMe", 00:32:31.348 "listen_addresses": [ 00:32:31.348 { 00:32:31.348 "trtype": "TCP", 00:32:31.348 "adrfam": "IPv4", 00:32:31.348 "traddr": "10.0.0.2", 00:32:31.348 "trsvcid": "4420" 00:32:31.348 } 00:32:31.348 ], 00:32:31.348 "allow_any_host": true, 00:32:31.348 "hosts": [], 00:32:31.348 "serial_number": 
"SPDK00000000000001", 00:32:31.348 "model_number": "SPDK bdev Controller", 00:32:31.348 "max_namespaces": 1, 00:32:31.348 "min_cntlid": 1, 00:32:31.348 "max_cntlid": 65519, 00:32:31.348 "namespaces": [ 00:32:31.348 { 00:32:31.348 "nsid": 1, 00:32:31.348 "bdev_name": "Nvme0n1", 00:32:31.348 "name": "Nvme0n1", 00:32:31.348 "nguid": "5B44E606CE9F4183BB5C4BC27C4981EA", 00:32:31.348 "uuid": "5b44e606-ce9f-4183-bb5c-4bc27c4981ea" 00:32:31.348 } 00:32:31.348 ] 00:32:31.348 } 00:32:31.348 ] 00:32:31.348 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.348 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:31.348 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:31.348 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:31.348 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ951302VM2P0BGN 00:32:31.349 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:31.349 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:31.349 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:31.349 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:31.349 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ951302VM2P0BGN '!=' PHLJ951302VM2P0BGN ']' 00:32:31.349 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:31.349 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:31.349 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.349 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:31.349 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.349 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:31.349 10:51:19 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:31.349 10:51:19 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:31.349 10:51:19 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:31.349 10:51:19 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:31.349 10:51:19 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:31.349 10:51:19 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:31.349 10:51:19 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:31.349 rmmod nvme_tcp 00:32:31.349 rmmod nvme_fabrics 00:32:31.349 rmmod nvme_keyring 00:32:31.349 10:51:19 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:31.349 10:51:19 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:31.349 10:51:19 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:31.349 10:51:19 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 549536 ']' 00:32:31.349 10:51:19 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 549536 00:32:31.349 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 549536 ']' 00:32:31.349 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 549536 00:32:31.349 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:32:31.349 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:31.349 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 549536 00:32:31.349 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:31.349 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:31.349 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 549536' 00:32:31.349 killing process with pid 549536 00:32:31.349 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 549536 00:32:31.349 10:51:19 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 549536 00:32:33.873 10:51:22 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:33.873 10:51:22 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:33.873 10:51:22 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:33.873 10:51:22 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:33.873 10:51:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:33.873 10:51:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:33.873 10:51:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:33.873 10:51:22 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:33.873 10:51:22 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:33.873 10:51:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.873 10:51:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:33.873 10:51:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.774 10:51:24 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:35.774 00:32:35.774 real 0m21.072s 00:32:35.774 user 0m31.787s 00:32:35.774 sys 0m3.528s 00:32:35.774 10:51:24 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:35.774 10:51:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:35.774 ************************************ 00:32:35.774 END TEST nvmf_identify_passthru 00:32:35.774 ************************************ 00:32:35.774 10:51:24 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:35.774 10:51:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:35.774 10:51:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:35.774 10:51:24 -- common/autotest_common.sh@10 -- # set +x 00:32:35.774 ************************************ 00:32:35.774 START TEST nvmf_dif 00:32:35.774 ************************************ 00:32:35.774 10:51:24 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:36.035 * Looking for test storage... 
00:32:36.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:36.035 10:51:24 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:36.035 10:51:24 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:32:36.035 10:51:24 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:36.035 10:51:24 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:36.035 10:51:24 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:36.035 10:51:24 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:36.035 10:51:24 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:36.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.036 --rc genhtml_branch_coverage=1 00:32:36.036 --rc genhtml_function_coverage=1 00:32:36.036 --rc genhtml_legend=1 00:32:36.036 --rc geninfo_all_blocks=1 00:32:36.036 --rc geninfo_unexecuted_blocks=1 00:32:36.036 00:32:36.036 ' 00:32:36.036 10:51:24 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:36.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.036 --rc genhtml_branch_coverage=1 00:32:36.036 --rc genhtml_function_coverage=1 00:32:36.036 --rc genhtml_legend=1 00:32:36.036 --rc geninfo_all_blocks=1 00:32:36.036 --rc geninfo_unexecuted_blocks=1 00:32:36.036 00:32:36.036 ' 00:32:36.036 10:51:24 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:32:36.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.036 --rc genhtml_branch_coverage=1 00:32:36.036 --rc genhtml_function_coverage=1 00:32:36.036 --rc genhtml_legend=1 00:32:36.036 --rc geninfo_all_blocks=1 00:32:36.036 --rc geninfo_unexecuted_blocks=1 00:32:36.036 00:32:36.036 ' 00:32:36.036 10:51:24 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:36.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.036 --rc genhtml_branch_coverage=1 00:32:36.036 --rc genhtml_function_coverage=1 00:32:36.036 --rc genhtml_legend=1 00:32:36.036 --rc geninfo_all_blocks=1 00:32:36.036 --rc geninfo_unexecuted_blocks=1 00:32:36.036 00:32:36.036 ' 00:32:36.036 10:51:24 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.036 10:51:24 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:36.036 10:51:24 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.036 10:51:24 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.036 10:51:24 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.036 10:51:24 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.036 10:51:24 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.036 10:51:24 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.036 10:51:24 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:36.036 10:51:24 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:36.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:36.036 10:51:24 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:36.036 10:51:24 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:36.036 10:51:24 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:36.036 10:51:24 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:36.036 10:51:24 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.036 10:51:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:36.036 10:51:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:36.036 10:51:24 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:32:36.036 10:51:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:32:38.567 Found 0000:82:00.0 (0x8086 - 0x159b) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:38.567 
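The discovery pass above matches supported NIC PCI IDs (here two Intel E810 ports, 0x8086:0x159b bound to the ice driver), and the step that follows resolves each matching PCI address to its kernel net device through a /sys/bus/pci/devices/<addr>/net/ glob. A minimal standalone sketch of the same idea, not the test's own helper; the 0x159b device ID is taken from this run and other supported IDs would need to be added:

#!/usr/bin/env bash
# Print "pci-address: netdev" for every Intel E810 port (0x8086:0x159b) that
# currently has a network driver bound; the net/ directory only exists then.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for netdir in "$pci"/net/*; do
        [[ -e $netdir ]] && echo "${pci##*/}: ${netdir##*/}"
    done
done

On this host that resolves to the two cvl_0_0/cvl_0_1 interfaces reported in the "Found net devices under 0000:82:00.x" lines that follow.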
10:51:26 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:32:38.567 Found 0000:82:00.1 (0x8086 - 0x159b) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:32:38.567 Found net devices under 0000:82:00.0: cvl_0_0 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:32:38.567 Found net devices under 0000:82:00.1: cvl_0_1 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:38.567 10:51:26 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:38.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:38.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:32:38.568 00:32:38.568 --- 10.0.0.2 ping statistics --- 00:32:38.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.568 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:38.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:38.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:32:38.568 00:32:38.568 --- 10.0.0.1 ping statistics --- 00:32:38.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.568 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:38.568 10:51:26 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:39.502 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:39.502 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:39.502 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:39.502 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:39.502 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:39.502 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:39.502 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:39.502 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:39.502 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:39.502 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:39.502 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:39.502 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:39.502 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:39.502 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:39.502 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:39.502 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:39.502 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:39.502 10:51:27 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.502 10:51:27 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:39.502 10:51:27 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:39.502 10:51:27 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.502 10:51:27 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:39.502 10:51:27 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:39.502 10:51:27 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:39.502 10:51:27 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:39.502 10:51:27 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:39.502 10:51:27 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:39.502 10:51:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:39.502 10:51:27 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=552821 00:32:39.502 10:51:27 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 552821 00:32:39.502 10:51:27 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:39.502 10:51:27 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 552821 ']' 00:32:39.502 10:51:27 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.502 10:51:27 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:39.502 10:51:27 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:32:39.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.502 10:51:27 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:39.502 10:51:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:39.760 [2024-11-15 10:51:28.002524] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:32:39.760 [2024-11-15 10:51:28.002612] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:39.760 [2024-11-15 10:51:28.075264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.760 [2024-11-15 10:51:28.136868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:39.760 [2024-11-15 10:51:28.136934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.760 [2024-11-15 10:51:28.136973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:39.760 [2024-11-15 10:51:28.136984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:39.760 [2024-11-15 10:51:28.136993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:39.760 [2024-11-15 10:51:28.137627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.019 10:51:28 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:40.019 10:51:28 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:32:40.019 10:51:28 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:40.019 10:51:28 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:40.019 10:51:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:40.019 10:51:28 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:40.019 10:51:28 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:40.019 10:51:28 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:40.019 10:51:28 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.019 10:51:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:40.019 [2024-11-15 10:51:28.285390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.019 10:51:28 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.019 10:51:28 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:40.019 10:51:28 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:40.019 10:51:28 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:40.019 10:51:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:40.019 ************************************ 00:32:40.019 START TEST fio_dif_1_default 00:32:40.019 ************************************ 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:40.019 bdev_null0 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:40.019 [2024-11-15 10:51:28.341747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:32:40.019 10:51:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:40.020 { 00:32:40.020 "params": { 00:32:40.020 "name": "Nvme$subsystem", 00:32:40.020 "trtype": "$TEST_TRANSPORT", 00:32:40.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:40.020 "adrfam": "ipv4", 00:32:40.020 "trsvcid": "$NVMF_PORT", 00:32:40.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:40.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:40.020 "hdgst": ${hdgst:-false}, 00:32:40.020 "ddgst": ${ddgst:-false} 00:32:40.020 }, 00:32:40.020 "method": "bdev_nvme_attach_controller" 00:32:40.020 } 00:32:40.020 EOF 00:32:40.020 )") 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
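What the trace above assembles is the generic pattern for driving an SPDK bdev from fio: the spdk_bdev fio plugin is injected via LD_PRELOAD and handed a JSON config describing the NVMe-oF attach (the normalized JSON is printed immediately below, after the jq step). A hand-rolled equivalent as a sketch: the attach parameters are copied from this run, while the file names, the SPDK path, and the outer subsystems/bdev wrapper are assumptions about the usual SPDK JSON config layout rather than the generated /dev/fd contents.

# bdev.json: one bdev_nvme_attach_controller entry, wrapped in the assumed
# {"subsystems":[{"subsystem":"bdev","config":[...]}]} layout.
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Preload the plugin (build/fio/spdk_bdev in the SPDK tree; path is a placeholder)
# and run the job file, mirroring the fio_bdev invocation in the trace.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio   # dif.fio: placeholder job file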
00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:40.020 "params": { 00:32:40.020 "name": "Nvme0", 00:32:40.020 "trtype": "tcp", 00:32:40.020 "traddr": "10.0.0.2", 00:32:40.020 "adrfam": "ipv4", 00:32:40.020 "trsvcid": "4420", 00:32:40.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:40.020 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:40.020 "hdgst": false, 00:32:40.020 "ddgst": false 00:32:40.020 }, 00:32:40.020 "method": "bdev_nvme_attach_controller" 00:32:40.020 }' 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:40.020 10:51:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:40.278 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:40.278 fio-3.35 00:32:40.278 Starting 1 thread 00:32:52.473 00:32:52.473 filename0: (groupid=0, jobs=1): err= 0: pid=553050: Fri Nov 15 10:51:39 2024 00:32:52.473 read: IOPS=199, BW=797KiB/s (816kB/s)(7984KiB/10014msec) 00:32:52.473 slat (nsec): min=5315, max=50252, avg=8957.82, stdev=2503.32 00:32:52.473 clat (usec): min=471, max=42504, avg=20039.34, stdev=20440.35 00:32:52.473 lat (usec): min=493, max=42515, avg=20048.30, stdev=20440.06 00:32:52.473 clat percentiles (usec): 00:32:52.473 | 1.00th=[ 510], 5.00th=[ 545], 10.00th=[ 553], 20.00th=[ 570], 00:32:52.473 | 30.00th=[ 586], 40.00th=[ 627], 50.00th=[ 685], 60.00th=[41157], 00:32:52.473 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:32:52.473 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:32:52.473 | 99.99th=[42730] 00:32:52.473 bw ( KiB/s): min= 704, max= 1536, per=99.84%, avg=796.80, stdev=175.55, samples=20 00:32:52.473 iops : min= 176, max= 384, avg=199.20, stdev=43.89, samples=20 00:32:52.473 lat (usec) : 500=0.20%, 750=51.90%, 1000=0.20% 00:32:52.473 lat (msec) : 10=0.20%, 50=47.49% 00:32:52.473 cpu : usr=90.44%, sys=9.27%, ctx=20, majf=0, minf=9 00:32:52.473 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.473 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.473 latency : target=0, window=0, percentile=100.00%, 
depth=4 00:32:52.473 00:32:52.473 Run status group 0 (all jobs): 00:32:52.473 READ: bw=797KiB/s (816kB/s), 797KiB/s-797KiB/s (816kB/s-816kB/s), io=7984KiB (8176kB), run=10014-10014msec 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.473 00:32:52.473 real 0m11.281s 00:32:52.473 user 0m10.441s 00:32:52.473 sys 0m1.233s 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:52.473 ************************************ 00:32:52.473 END TEST fio_dif_1_default 00:32:52.473 ************************************ 00:32:52.473 10:51:39 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:52.473 10:51:39 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:52.473 10:51:39 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:52.473 10:51:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:52.473 ************************************ 00:32:52.473 START TEST fio_dif_1_multi_subsystems 00:32:52.473 ************************************ 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:52.473 bdev_null0 00:32:52.473 10:51:39 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:52.473 [2024-11-15 10:51:39.675965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:52.473 bdev_null1 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.473 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:52.473 { 00:32:52.473 "params": { 00:32:52.473 "name": "Nvme$subsystem", 00:32:52.473 "trtype": "$TEST_TRANSPORT", 00:32:52.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.474 "adrfam": "ipv4", 00:32:52.474 "trsvcid": "$NVMF_PORT", 00:32:52.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.474 "hdgst": ${hdgst:-false}, 00:32:52.474 "ddgst": ${ddgst:-false} 00:32:52.474 }, 00:32:52.474 "method": "bdev_nvme_attach_controller" 00:32:52.474 } 00:32:52.474 EOF 00:32:52.474 )") 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.474 
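The ldd | grep libasan | awk chain that runs around this point in the trace is how the wrapper decides whether a sanitizer runtime has to be preloaded in front of the plugin: it scans the plugin's shared-library dependencies for libasan (gcc) or libclang_rt.asan (clang) and prepends whatever it finds to LD_PRELOAD. In this run both lookups come back empty, so only the plugin itself ends up preloaded. A condensed sketch, with the plugin path as a placeholder:

plugin=/path/to/spdk/build/fio/spdk_bdev   # placeholder for build/fio/spdk_bdev in the SPDK tree
asan_lib=
for sanitizer in libasan libclang_rt.asan; do   # gcc and clang ASan runtime names
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
# Empty for a non-sanitized build (as here); otherwise the runtime has to come
# first so ASan initializes before the plugin is loaded.
echo "LD_PRELOAD='$asan_lib $plugin'"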
10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:52.474 { 00:32:52.474 "params": { 00:32:52.474 "name": "Nvme$subsystem", 00:32:52.474 "trtype": "$TEST_TRANSPORT", 00:32:52.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.474 "adrfam": "ipv4", 00:32:52.474 "trsvcid": "$NVMF_PORT", 00:32:52.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.474 "hdgst": ${hdgst:-false}, 00:32:52.474 "ddgst": ${ddgst:-false} 00:32:52.474 }, 00:32:52.474 "method": "bdev_nvme_attach_controller" 00:32:52.474 } 00:32:52.474 EOF 00:32:52.474 )") 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:52.474 "params": { 00:32:52.474 "name": "Nvme0", 00:32:52.474 "trtype": "tcp", 00:32:52.474 "traddr": "10.0.0.2", 00:32:52.474 "adrfam": "ipv4", 00:32:52.474 "trsvcid": "4420", 00:32:52.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:52.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:52.474 "hdgst": false, 00:32:52.474 "ddgst": false 00:32:52.474 }, 00:32:52.474 "method": "bdev_nvme_attach_controller" 00:32:52.474 },{ 00:32:52.474 "params": { 00:32:52.474 "name": "Nvme1", 00:32:52.474 "trtype": "tcp", 00:32:52.474 "traddr": "10.0.0.2", 00:32:52.474 "adrfam": "ipv4", 00:32:52.474 "trsvcid": "4420", 00:32:52.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:52.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:52.474 "hdgst": false, 00:32:52.474 "ddgst": false 00:32:52.474 }, 00:32:52.474 "method": "bdev_nvme_attach_controller" 00:32:52.474 }' 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib= 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:52.474 10:51:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.474 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:52.474 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:52.474 fio-3.35 00:32:52.474 Starting 2 threads 00:33:02.568 00:33:02.568 filename0: (groupid=0, jobs=1): err= 0: pid=554564: Fri Nov 15 10:51:50 2024 00:33:02.568 read: IOPS=212, BW=850KiB/s (871kB/s)(8528KiB/10031msec) 00:33:02.568 slat (nsec): min=7609, max=38556, avg=9561.47, stdev=2877.34 00:33:02.568 clat (usec): min=507, max=42698, avg=18789.42, stdev=20381.17 00:33:02.568 lat (usec): min=516, max=42712, avg=18798.98, stdev=20380.95 00:33:02.568 clat percentiles (usec): 00:33:02.568 | 1.00th=[ 545], 5.00th=[ 553], 10.00th=[ 562], 20.00th=[ 578], 00:33:02.568 | 30.00th=[ 586], 40.00th=[ 627], 50.00th=[ 717], 60.00th=[41157], 00:33:02.568 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:33:02.568 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:33:02.568 | 99.99th=[42730] 00:33:02.568 bw ( KiB/s): min= 736, max= 1120, per=52.31%, avg=851.20, stdev=101.94, samples=20 00:33:02.568 iops : min= 184, max= 280, avg=212.80, stdev=25.48, samples=20 00:33:02.568 lat (usec) : 750=50.38%, 1000=4.74% 00:33:02.568 lat (msec) : 2=0.42%, 4=0.19%, 50=44.28% 00:33:02.568 cpu : usr=94.82%, sys=4.82%, ctx=16, majf=0, minf=115 00:33:02.568 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.568 issued rwts: total=2132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.568 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:02.569 filename1: (groupid=0, jobs=1): err= 0: pid=554565: Fri Nov 15 10:51:50 2024 00:33:02.569 read: IOPS=194, BW=777KiB/s (795kB/s)(7792KiB/10031msec) 00:33:02.569 slat (nsec): min=7184, max=89351, avg=9540.00, stdev=3710.05 00:33:02.569 clat (usec): min=465, max=43555, avg=20566.87, stdev=20698.89 00:33:02.569 lat (usec): min=473, max=43593, avg=20576.41, stdev=20698.87 00:33:02.569 clat percentiles (usec): 00:33:02.569 | 1.00th=[ 486], 5.00th=[ 515], 10.00th=[ 529], 20.00th=[ 545], 00:33:02.569 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 840], 60.00th=[41157], 00:33:02.569 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:02.569 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:33:02.569 | 99.99th=[43779] 00:33:02.569 bw ( KiB/s): min= 448, max= 1408, per=47.76%, avg=777.60, stdev=286.80, samples=20 00:33:02.569 iops : min= 112, max= 352, avg=194.40, stdev=71.70, samples=20 00:33:02.569 lat (usec) : 500=2.57%, 750=44.25%, 1000=4.93% 00:33:02.569 lat (msec) : 50=48.25% 00:33:02.569 cpu : usr=94.59%, sys=5.00%, ctx=32, majf=0, minf=188 00:33:02.569 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.569 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.569 issued rwts: total=1948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.569 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:02.569 00:33:02.569 Run status group 0 (all jobs): 00:33:02.569 READ: bw=1627KiB/s (1666kB/s), 777KiB/s-850KiB/s (795kB/s-871kB/s), io=15.9MiB (16.7MB), run=10031-10031msec 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.827 00:33:02.827 real 0m11.425s 00:33:02.827 user 0m20.583s 00:33:02.827 sys 0m1.280s 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:02.827 10:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:02.827 ************************************ 00:33:02.827 END TEST fio_dif_1_multi_subsystems 00:33:02.827 ************************************ 00:33:02.827 10:51:51 
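The destroy_subsystems call traced just above the END banner undoes the per-index setup with two RPCs each: the subsystem (and with it its listener and namespace) is deleted first, then the backing null bdev. rpc_cmd in the trace reduces to scripts/rpc.py calls, so a standalone equivalent of destroy_subsystems 0 1 looks like the sketch below; the rpc.py path is a placeholder and the default /var/tmp/spdk.sock socket registered earlier is assumed.

RPC=/path/to/spdk/scripts/rpc.py   # placeholder; talks to /var/tmp/spdk.sock by default
for i in 0 1; do
    "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # drops the listener and namespace with it
    "$RPC" bdev_null_delete "bdev_null$i"                        # then frees the null bdev behind it
done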
nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:02.827 10:51:51 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:02.827 10:51:51 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:02.827 10:51:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:02.827 ************************************ 00:33:02.827 START TEST fio_dif_rand_params 00:33:02.827 ************************************ 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:02.827 bdev_null0 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:02.827 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:02.828 [2024-11-15 10:51:51.153393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:02.828 { 00:33:02.828 "params": { 00:33:02.828 "name": "Nvme$subsystem", 00:33:02.828 "trtype": "$TEST_TRANSPORT", 00:33:02.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:02.828 "adrfam": "ipv4", 00:33:02.828 "trsvcid": "$NVMF_PORT", 00:33:02.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:02.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:02.828 "hdgst": ${hdgst:-false}, 00:33:02.828 "ddgst": ${ddgst:-false} 00:33:02.828 }, 00:33:02.828 "method": "bdev_nvme_attach_controller" 00:33:02.828 } 00:33:02.828 EOF 00:33:02.828 )") 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@584 -- # jq . 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:02.828 "params": { 00:33:02.828 "name": "Nvme0", 00:33:02.828 "trtype": "tcp", 00:33:02.828 "traddr": "10.0.0.2", 00:33:02.828 "adrfam": "ipv4", 00:33:02.828 "trsvcid": "4420", 00:33:02.828 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:02.828 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:02.828 "hdgst": false, 00:33:02.828 "ddgst": false 00:33:02.828 }, 00:33:02.828 "method": "bdev_nvme_attach_controller" 00:33:02.828 }' 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:02.828 10:51:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:03.086 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:03.086 ... 
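The trace above captures the whole launch pattern for this job: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry for the target at 10.0.0.2:4420, jq collapses it into a single JSON document fed in on /dev/fd/62, the job file arrives on /dev/fd/61, and fio is started with SPDK's bdev ioengine plugin preloaded. A minimal stand-alone sketch of the same pattern follows; the SPDK_DIR path, the Nvme0n1 bdev name, the temp-file plumbing (the test uses /dev/fd descriptors instead), and the outer "subsystems" wrapper around the method entry are assumptions for illustration, not taken verbatim from this log.

# Sketch only: approximate the fio + spdk_bdev launch seen in the trace above.
# Assumptions: SPDK_DIR points at a built SPDK tree, the TCP target at
# 10.0.0.2:4420 still exports nqn.2016-06.io.spdk:cnode0, and the attached
# controller "Nvme0" exposes its namespace as bdev "Nvme0n1".
SPDK_DIR=/path/to/spdk    # placeholder

# fio job file; the SPDK bdev plugin requires thread mode (thread=1).
cat > /tmp/dif.fio <<'JOB'
[global]
ioengine=spdk_bdev
thread=1
[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
JOB

# JSON config consumed by --spdk_json_conf; only the method/params block is
# visible in the trace, the surrounding "subsystems" layout is assumed from
# SPDK's standard JSON config format.
cat > /tmp/bdev.json <<'CONF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "adrfam": "ipv4",
                "traddr": "10.0.0.2", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false } } ] } ] }
CONF

# Preload the plugin and run fio against the NVMe-oF-backed bdev.
LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" \
    fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio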
00:33:03.086 fio-3.35 00:33:03.086 Starting 3 threads 00:33:09.658 00:33:09.658 filename0: (groupid=0, jobs=1): err= 0: pid=555958: Fri Nov 15 10:51:56 2024 00:33:09.658 read: IOPS=242, BW=30.4MiB/s (31.8MB/s)(152MiB/5007msec) 00:33:09.658 slat (nsec): min=3694, max=52749, avg=19218.44, stdev=5349.25 00:33:09.658 clat (usec): min=4612, max=51645, avg=12325.85, stdev=4078.31 00:33:09.658 lat (usec): min=4624, max=51665, avg=12345.07, stdev=4078.16 00:33:09.658 clat percentiles (usec): 00:33:09.658 | 1.00th=[ 7963], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10421], 00:33:09.658 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12256], 00:33:09.658 | 70.00th=[13042], 80.00th=[13698], 90.00th=[14615], 95.00th=[15139], 00:33:09.658 | 99.00th=[17695], 99.50th=[49021], 99.90th=[51643], 99.95th=[51643], 00:33:09.658 | 99.99th=[51643] 00:33:09.658 bw ( KiB/s): min=27648, max=33792, per=32.97%, avg=31084.20, stdev=1955.41, samples=10 00:33:09.658 iops : min= 216, max= 264, avg=242.80, stdev=15.32, samples=10 00:33:09.658 lat (msec) : 10=12.25%, 20=86.76%, 50=0.82%, 100=0.16% 00:33:09.658 cpu : usr=94.45%, sys=5.05%, ctx=8, majf=0, minf=0 00:33:09.658 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.658 issued rwts: total=1216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.658 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:09.658 filename0: (groupid=0, jobs=1): err= 0: pid=555959: Fri Nov 15 10:51:56 2024 00:33:09.658 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(152MiB/5004msec) 00:33:09.658 slat (nsec): min=7627, max=53789, avg=18943.85, stdev=6187.77 00:33:09.658 clat (usec): min=6412, max=59889, avg=12320.59, stdev=4768.01 00:33:09.658 lat (usec): min=6426, max=59906, avg=12339.53, stdev=4768.24 00:33:09.658 clat percentiles (usec): 00:33:09.658 | 1.00th=[ 7832], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10683], 00:33:09.658 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[12125], 00:33:09.658 | 70.00th=[12518], 80.00th=[13042], 90.00th=[13829], 95.00th=[14877], 00:33:09.658 | 99.00th=[50070], 99.50th=[51119], 99.90th=[60031], 99.95th=[60031], 00:33:09.658 | 99.99th=[60031] 00:33:09.658 bw ( KiB/s): min=21504, max=34304, per=32.94%, avg=31052.80, stdev=3559.64, samples=10 00:33:09.658 iops : min= 168, max= 268, avg=242.60, stdev=27.81, samples=10 00:33:09.658 lat (msec) : 10=9.13%, 20=89.64%, 100=1.23% 00:33:09.658 cpu : usr=90.03%, sys=6.88%, ctx=318, majf=0, minf=0 00:33:09.658 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.658 issued rwts: total=1216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.658 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:09.658 filename0: (groupid=0, jobs=1): err= 0: pid=555960: Fri Nov 15 10:51:56 2024 00:33:09.658 read: IOPS=250, BW=31.4MiB/s (32.9MB/s)(157MiB/5004msec) 00:33:09.658 slat (nsec): min=7746, max=39393, avg=17611.21, stdev=4352.10 00:33:09.658 clat (usec): min=4074, max=46790, avg=11930.00, stdev=3104.23 00:33:09.658 lat (usec): min=4086, max=46811, avg=11947.61, stdev=3104.34 00:33:09.658 clat percentiles (usec): 00:33:09.658 | 1.00th=[ 5080], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10421], 
00:33:09.658 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11731], 60.00th=[12125], 00:33:09.658 | 70.00th=[12780], 80.00th=[13435], 90.00th=[14353], 95.00th=[14877], 00:33:09.658 | 99.00th=[15795], 99.50th=[20841], 99.90th=[46924], 99.95th=[46924], 00:33:09.658 | 99.99th=[46924] 00:33:09.658 bw ( KiB/s): min=29952, max=35584, per=34.02%, avg=32076.80, stdev=1877.53, samples=10 00:33:09.658 iops : min= 234, max= 278, avg=250.60, stdev=14.67, samples=10 00:33:09.658 lat (msec) : 10=13.38%, 20=86.07%, 50=0.56% 00:33:09.658 cpu : usr=95.28%, sys=4.16%, ctx=13, majf=0, minf=0 00:33:09.658 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.658 issued rwts: total=1256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.658 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:09.658 00:33:09.658 Run status group 0 (all jobs): 00:33:09.658 READ: bw=92.1MiB/s (96.5MB/s), 30.4MiB/s-31.4MiB/s (31.8MB/s-32.9MB/s), io=461MiB (483MB), run=5004-5007msec 00:33:09.658 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:09.658 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:09.658 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 bdev_null0 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 [2024-11-15 10:51:57.224731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 bdev_null1 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 bdev_null2 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:09.659 { 00:33:09.659 "params": { 00:33:09.659 "name": "Nvme$subsystem", 00:33:09.659 
"trtype": "$TEST_TRANSPORT", 00:33:09.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:09.659 "adrfam": "ipv4", 00:33:09.659 "trsvcid": "$NVMF_PORT", 00:33:09.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:09.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:09.659 "hdgst": ${hdgst:-false}, 00:33:09.659 "ddgst": ${ddgst:-false} 00:33:09.659 }, 00:33:09.659 "method": "bdev_nvme_attach_controller" 00:33:09.659 } 00:33:09.659 EOF 00:33:09.659 )") 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.659 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:09.660 { 00:33:09.660 "params": { 00:33:09.660 "name": "Nvme$subsystem", 00:33:09.660 "trtype": "$TEST_TRANSPORT", 00:33:09.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:09.660 "adrfam": "ipv4", 00:33:09.660 "trsvcid": "$NVMF_PORT", 00:33:09.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:09.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:09.660 "hdgst": ${hdgst:-false}, 00:33:09.660 "ddgst": ${ddgst:-false} 00:33:09.660 }, 00:33:09.660 "method": "bdev_nvme_attach_controller" 00:33:09.660 } 00:33:09.660 EOF 00:33:09.660 )") 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:09.660 { 00:33:09.660 "params": { 00:33:09.660 "name": "Nvme$subsystem", 00:33:09.660 "trtype": "$TEST_TRANSPORT", 00:33:09.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:09.660 "adrfam": "ipv4", 00:33:09.660 "trsvcid": "$NVMF_PORT", 00:33:09.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:09.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:09.660 "hdgst": ${hdgst:-false}, 00:33:09.660 "ddgst": ${ddgst:-false} 00:33:09.660 }, 00:33:09.660 "method": "bdev_nvme_attach_controller" 00:33:09.660 } 00:33:09.660 EOF 00:33:09.660 )") 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:09.660 "params": { 00:33:09.660 "name": "Nvme0", 00:33:09.660 "trtype": "tcp", 00:33:09.660 "traddr": "10.0.0.2", 00:33:09.660 "adrfam": "ipv4", 00:33:09.660 "trsvcid": "4420", 00:33:09.660 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:09.660 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:09.660 "hdgst": false, 00:33:09.660 "ddgst": false 00:33:09.660 }, 00:33:09.660 "method": "bdev_nvme_attach_controller" 00:33:09.660 },{ 00:33:09.660 "params": { 00:33:09.660 "name": "Nvme1", 00:33:09.660 "trtype": "tcp", 00:33:09.660 "traddr": "10.0.0.2", 00:33:09.660 "adrfam": "ipv4", 00:33:09.660 "trsvcid": "4420", 00:33:09.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:09.660 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:09.660 "hdgst": false, 00:33:09.660 "ddgst": false 00:33:09.660 }, 00:33:09.660 "method": "bdev_nvme_attach_controller" 00:33:09.660 },{ 00:33:09.660 "params": { 00:33:09.660 "name": "Nvme2", 00:33:09.660 "trtype": "tcp", 00:33:09.660 "traddr": "10.0.0.2", 00:33:09.660 "adrfam": "ipv4", 00:33:09.660 "trsvcid": "4420", 00:33:09.660 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:09.660 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:09.660 "hdgst": false, 00:33:09.660 "ddgst": false 00:33:09.660 }, 00:33:09.660 "method": "bdev_nvme_attach_controller" 00:33:09.660 }' 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:09.660 10:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.660 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:09.660 ... 00:33:09.660 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:09.660 ... 00:33:09.660 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:09.660 ... 00:33:09.660 fio-3.35 00:33:09.660 Starting 24 threads 00:33:21.864 00:33:21.864 filename0: (groupid=0, jobs=1): err= 0: pid=556722: Fri Nov 15 10:52:08 2024 00:33:21.864 read: IOPS=82, BW=330KiB/s (338kB/s)(3328KiB/10079msec) 00:33:21.864 slat (usec): min=8, max=102, avg=50.92, stdev=25.68 00:33:21.864 clat (msec): min=22, max=406, avg=193.39, stdev=125.45 00:33:21.864 lat (msec): min=22, max=406, avg=193.44, stdev=125.46 00:33:21.864 clat percentiles (msec): 00:33:21.864 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.864 | 30.00th=[ 35], 40.00th=[ 186], 50.00th=[ 266], 60.00th=[ 279], 00:33:21.864 | 70.00th=[ 279], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 317], 00:33:21.864 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:33:21.864 | 99.99th=[ 405] 00:33:21.864 bw ( KiB/s): min= 128, max= 1536, per=3.82%, avg=326.40, stdev=327.67, samples=20 00:33:21.864 iops : min= 32, max= 384, avg=81.60, stdev=81.92, samples=20 00:33:21.864 lat (msec) : 50=34.62%, 100=1.68%, 250=12.74%, 500=50.96% 00:33:21.864 cpu : usr=98.35%, sys=1.22%, ctx=14, majf=0, minf=9 00:33:21.864 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:33:21.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.864 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.864 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.864 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.864 filename0: (groupid=0, jobs=1): err= 0: pid=556723: Fri Nov 15 10:52:08 2024 00:33:21.864 read: IOPS=83, BW=335KiB/s (343kB/s)(3384KiB/10106msec) 00:33:21.864 slat (usec): min=11, max=113, avg=65.56, stdev=17.13 00:33:21.864 clat (msec): min=24, max=422, avg=190.34, stdev=122.84 00:33:21.864 lat (msec): min=24, max=422, avg=190.40, stdev=122.84 00:33:21.864 clat percentiles (msec): 00:33:21.864 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.864 | 30.00th=[ 35], 40.00th=[ 176], 50.00th=[ 211], 60.00th=[ 275], 00:33:21.864 | 70.00th=[ 279], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 321], 00:33:21.864 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 422], 99.95th=[ 422], 00:33:21.864 | 99.99th=[ 422] 00:33:21.864 bw ( KiB/s): min= 128, max= 1536, per=3.89%, avg=332.00, stdev=340.60, samples=20 00:33:21.864 iops : min= 32, max= 384, avg=83.00, stdev=85.15, samples=20 00:33:21.864 lat (msec) : 50=34.04%, 250=17.97%, 500=47.99% 00:33:21.864 cpu : usr=98.29%, sys=1.28%, ctx=15, majf=0, minf=9 00:33:21.864 IO depths : 1=4.4%, 2=10.6%, 4=25.1%, 8=51.9%, 16=8.0%, 32=0.0%, >=64=0.0% 00:33:21.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:21.864 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.864 issued rwts: total=846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.864 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.864 filename0: (groupid=0, jobs=1): err= 0: pid=556724: Fri Nov 15 10:52:08 2024 00:33:21.864 read: IOPS=81, BW=324KiB/s (332kB/s)(3264KiB/10074msec) 00:33:21.864 slat (nsec): min=17014, max=96121, avg=47320.75, stdev=18942.90 00:33:21.864 clat (msec): min=22, max=401, avg=197.12, stdev=127.26 00:33:21.864 lat (msec): min=22, max=401, avg=197.17, stdev=127.24 00:33:21.864 clat percentiles (msec): 00:33:21.864 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.864 | 30.00th=[ 36], 40.00th=[ 209], 50.00th=[ 275], 60.00th=[ 275], 00:33:21.864 | 70.00th=[ 292], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 372], 00:33:21.864 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 401], 99.95th=[ 401], 00:33:21.864 | 99.99th=[ 401] 00:33:21.864 bw ( KiB/s): min= 128, max= 1664, per=3.75%, avg=320.00, stdev=355.50, samples=20 00:33:21.864 iops : min= 32, max= 416, avg=80.00, stdev=88.88, samples=20 00:33:21.864 lat (msec) : 50=35.05%, 100=0.25%, 250=11.03%, 500=53.68% 00:33:21.864 cpu : usr=97.88%, sys=1.49%, ctx=157, majf=0, minf=9 00:33:21.864 IO depths : 1=4.0%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:33:21.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.864 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.864 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.864 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.864 filename0: (groupid=0, jobs=1): err= 0: pid=556725: Fri Nov 15 10:52:08 2024 00:33:21.864 read: IOPS=106, BW=425KiB/s (435kB/s)(4288KiB/10099msec) 00:33:21.864 slat (nsec): min=8098, max=87555, avg=14546.95, stdev=11333.45 00:33:21.864 clat (msec): min=26, max=257, avg=149.39, stdev=74.16 00:33:21.864 lat (msec): min=26, max=257, avg=149.41, stdev=74.16 00:33:21.864 clat percentiles (msec): 00:33:21.864 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:33:21.864 | 30.00th=[ 150], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 197], 00:33:21.864 | 70.00th=[ 199], 80.00th=[ 207], 90.00th=[ 207], 95.00th=[ 209], 00:33:21.864 | 99.00th=[ 241], 99.50th=[ 241], 99.90th=[ 257], 99.95th=[ 257], 00:33:21.864 | 99.99th=[ 257] 00:33:21.865 bw ( KiB/s): min= 256, max= 1664, per=5.02%, avg=428.00, stdev=334.25, samples=20 00:33:21.865 iops : min= 64, max= 416, avg=107.00, stdev=83.56, samples=20 00:33:21.865 lat (msec) : 50=28.36%, 250=71.46%, 500=0.19% 00:33:21.865 cpu : usr=98.25%, sys=1.27%, ctx=55, majf=0, minf=9 00:33:21.865 IO depths : 1=2.7%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:33:21.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 issued rwts: total=1072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.865 filename0: (groupid=0, jobs=1): err= 0: pid=556726: Fri Nov 15 10:52:08 2024 00:33:21.865 read: IOPS=82, BW=330KiB/s (338kB/s)(3328KiB/10096msec) 00:33:21.865 slat (nsec): min=4371, max=48288, avg=24292.88, stdev=7291.19 00:33:21.865 clat (msec): min=26, max=423, avg=193.95, stdev=125.11 00:33:21.865 lat (msec): min=26, max=423, avg=193.97, stdev=125.11 00:33:21.865 clat percentiles (msec): 00:33:21.865 | 
1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.865 | 30.00th=[ 35], 40.00th=[ 197], 50.00th=[ 268], 60.00th=[ 275], 00:33:21.865 | 70.00th=[ 279], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 326], 00:33:21.865 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 422], 99.95th=[ 422], 00:33:21.865 | 99.99th=[ 422] 00:33:21.865 bw ( KiB/s): min= 128, max= 1664, per=3.82%, avg=326.40, stdev=355.44, samples=20 00:33:21.865 iops : min= 32, max= 416, avg=81.60, stdev=88.86, samples=20 00:33:21.865 lat (msec) : 50=34.86%, 100=1.92%, 250=12.26%, 500=50.96% 00:33:21.865 cpu : usr=97.88%, sys=1.53%, ctx=33, majf=0, minf=10 00:33:21.865 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:33:21.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.865 filename0: (groupid=0, jobs=1): err= 0: pid=556727: Fri Nov 15 10:52:08 2024 00:33:21.865 read: IOPS=82, BW=330KiB/s (338kB/s)(3328KiB/10093msec) 00:33:21.865 slat (usec): min=6, max=114, avg=75.26, stdev=12.63 00:33:21.865 clat (msec): min=24, max=394, avg=193.46, stdev=123.55 00:33:21.865 lat (msec): min=24, max=394, avg=193.53, stdev=123.55 00:33:21.865 clat percentiles (msec): 00:33:21.865 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.865 | 30.00th=[ 35], 40.00th=[ 190], 50.00th=[ 249], 60.00th=[ 275], 00:33:21.865 | 70.00th=[ 279], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 326], 00:33:21.865 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 397], 99.95th=[ 397], 00:33:21.865 | 99.99th=[ 397] 00:33:21.865 bw ( KiB/s): min= 128, max= 1552, per=3.82%, avg=326.40, stdev=343.49, samples=20 00:33:21.865 iops : min= 32, max= 388, avg=81.60, stdev=85.87, samples=20 00:33:21.865 lat (msec) : 50=34.62%, 100=0.24%, 250=16.11%, 500=49.04% 00:33:21.865 cpu : usr=98.42%, sys=1.14%, ctx=17, majf=0, minf=9 00:33:21.865 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:33:21.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.865 filename0: (groupid=0, jobs=1): err= 0: pid=556728: Fri Nov 15 10:52:08 2024 00:33:21.865 read: IOPS=80, BW=324KiB/s (332kB/s)(3264KiB/10080msec) 00:33:21.865 slat (usec): min=9, max=106, avg=25.89, stdev=10.60 00:33:21.865 clat (msec): min=32, max=401, avg=197.33, stdev=125.61 00:33:21.865 lat (msec): min=32, max=401, avg=197.35, stdev=125.61 00:33:21.865 clat percentiles (msec): 00:33:21.865 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:33:21.865 | 30.00th=[ 36], 40.00th=[ 224], 50.00th=[ 275], 60.00th=[ 275], 00:33:21.865 | 70.00th=[ 292], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 317], 00:33:21.865 | 99.00th=[ 372], 99.50th=[ 384], 99.90th=[ 401], 99.95th=[ 401], 00:33:21.865 | 99.99th=[ 401] 00:33:21.865 bw ( KiB/s): min= 128, max= 1664, per=3.75%, avg=320.00, stdev=355.77, samples=20 00:33:21.865 iops : min= 32, max= 416, avg=80.00, stdev=88.94, samples=20 00:33:21.865 lat (msec) : 50=35.29%, 250=8.82%, 500=55.88% 00:33:21.865 cpu : usr=98.37%, sys=1.21%, ctx=15, majf=0, minf=9 00:33:21.865 IO depths : 1=5.5%, 2=11.8%, 
4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:33:21.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.865 filename0: (groupid=0, jobs=1): err= 0: pid=556729: Fri Nov 15 10:52:08 2024 00:33:21.865 read: IOPS=106, BW=425KiB/s (435kB/s)(4288KiB/10100msec) 00:33:21.865 slat (nsec): min=8113, max=51719, avg=16459.93, stdev=8276.44 00:33:21.865 clat (msec): min=26, max=282, avg=149.40, stdev=72.73 00:33:21.865 lat (msec): min=26, max=282, avg=149.42, stdev=72.72 00:33:21.865 clat percentiles (msec): 00:33:21.865 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:33:21.865 | 30.00th=[ 131], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 197], 00:33:21.865 | 70.00th=[ 199], 80.00th=[ 205], 90.00th=[ 207], 95.00th=[ 207], 00:33:21.865 | 99.00th=[ 228], 99.50th=[ 228], 99.90th=[ 284], 99.95th=[ 284], 00:33:21.865 | 99.99th=[ 284] 00:33:21.865 bw ( KiB/s): min= 256, max= 1664, per=4.95%, avg=422.40, stdev=335.96, samples=20 00:33:21.865 iops : min= 64, max= 416, avg=105.60, stdev=83.99, samples=20 00:33:21.865 lat (msec) : 50=27.05%, 100=1.31%, 250=71.27%, 500=0.37% 00:33:21.865 cpu : usr=98.05%, sys=1.48%, ctx=49, majf=0, minf=9 00:33:21.865 IO depths : 1=2.2%, 2=8.5%, 4=25.0%, 8=54.0%, 16=10.3%, 32=0.0%, >=64=0.0% 00:33:21.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 issued rwts: total=1072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.865 filename1: (groupid=0, jobs=1): err= 0: pid=556730: Fri Nov 15 10:52:08 2024 00:33:21.865 read: IOPS=81, BW=324KiB/s (332kB/s)(3264KiB/10074msec) 00:33:21.865 slat (usec): min=22, max=127, avg=71.32, stdev=11.85 00:33:21.865 clat (msec): min=21, max=380, avg=196.88, stdev=125.09 00:33:21.865 lat (msec): min=21, max=380, avg=196.95, stdev=125.09 00:33:21.865 clat percentiles (msec): 00:33:21.865 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.865 | 30.00th=[ 36], 40.00th=[ 234], 50.00th=[ 275], 60.00th=[ 275], 00:33:21.865 | 70.00th=[ 292], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 317], 00:33:21.865 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 380], 99.95th=[ 380], 00:33:21.865 | 99.99th=[ 380] 00:33:21.865 bw ( KiB/s): min= 128, max= 1664, per=3.75%, avg=320.00, stdev=356.03, samples=20 00:33:21.865 iops : min= 32, max= 416, avg=80.00, stdev=89.01, samples=20 00:33:21.865 lat (msec) : 50=35.29%, 250=7.84%, 500=56.86% 00:33:21.865 cpu : usr=98.30%, sys=1.27%, ctx=15, majf=0, minf=9 00:33:21.865 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:21.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.865 filename1: (groupid=0, jobs=1): err= 0: pid=556731: Fri Nov 15 10:52:08 2024 00:33:21.865 read: IOPS=82, BW=330KiB/s (337kB/s)(3328KiB/10100msec) 00:33:21.865 slat (usec): min=10, max=102, avg=68.56, stdev=15.96 00:33:21.865 clat (msec): min=22, max=406, avg=192.11, stdev=125.01 00:33:21.865 
lat (msec): min=22, max=406, avg=192.18, stdev=125.01 00:33:21.865 clat percentiles (msec): 00:33:21.865 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.865 | 30.00th=[ 35], 40.00th=[ 186], 50.00th=[ 266], 60.00th=[ 275], 00:33:21.865 | 70.00th=[ 279], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 317], 00:33:21.865 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:33:21.865 | 99.99th=[ 405] 00:33:21.865 bw ( KiB/s): min= 128, max= 1664, per=3.82%, avg=326.40, stdev=352.74, samples=20 00:33:21.865 iops : min= 32, max= 416, avg=81.60, stdev=88.19, samples=20 00:33:21.865 lat (msec) : 50=34.62%, 100=1.68%, 250=12.74%, 500=50.96% 00:33:21.865 cpu : usr=98.68%, sys=0.89%, ctx=11, majf=0, minf=9 00:33:21.865 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:33:21.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.865 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.866 filename1: (groupid=0, jobs=1): err= 0: pid=556732: Fri Nov 15 10:52:08 2024 00:33:21.866 read: IOPS=82, BW=330KiB/s (338kB/s)(3328KiB/10090msec) 00:33:21.866 slat (usec): min=7, max=104, avg=71.85, stdev=14.02 00:33:21.866 clat (msec): min=24, max=394, avg=193.41, stdev=122.47 00:33:21.866 lat (msec): min=24, max=394, avg=193.49, stdev=122.47 00:33:21.866 clat percentiles (msec): 00:33:21.866 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.866 | 30.00th=[ 35], 40.00th=[ 190], 50.00th=[ 268], 60.00th=[ 275], 00:33:21.866 | 70.00th=[ 279], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 321], 00:33:21.866 | 99.00th=[ 388], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:33:21.866 | 99.99th=[ 393] 00:33:21.866 bw ( KiB/s): min= 128, max= 1536, per=3.82%, avg=326.40, stdev=340.58, samples=20 00:33:21.866 iops : min= 32, max= 384, avg=81.60, stdev=85.14, samples=20 00:33:21.866 lat (msec) : 50=34.62%, 250=14.90%, 500=50.48% 00:33:21.866 cpu : usr=98.31%, sys=1.25%, ctx=16, majf=0, minf=9 00:33:21.866 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:21.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.866 filename1: (groupid=0, jobs=1): err= 0: pid=556733: Fri Nov 15 10:52:08 2024 00:33:21.866 read: IOPS=83, BW=336KiB/s (344kB/s)(3392KiB/10106msec) 00:33:21.866 slat (usec): min=8, max=115, avg=52.41, stdev=26.75 00:33:21.866 clat (msec): min=22, max=394, avg=190.13, stdev=121.19 00:33:21.866 lat (msec): min=22, max=394, avg=190.19, stdev=121.21 00:33:21.866 clat percentiles (msec): 00:33:21.866 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.866 | 30.00th=[ 35], 40.00th=[ 186], 50.00th=[ 239], 60.00th=[ 275], 00:33:21.866 | 70.00th=[ 279], 80.00th=[ 313], 90.00th=[ 313], 95.00th=[ 317], 00:33:21.866 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 397], 99.95th=[ 397], 00:33:21.866 | 99.99th=[ 397] 00:33:21.866 bw ( KiB/s): min= 128, max= 1664, per=3.89%, avg=332.80, stdev=352.52, samples=20 00:33:21.866 iops : min= 32, max= 416, avg=83.20, stdev=88.13, samples=20 00:33:21.866 lat (msec) : 50=33.96%, 100=1.65%, 250=15.33%, 500=49.06% 
00:33:21.866 cpu : usr=98.24%, sys=1.33%, ctx=20, majf=0, minf=9 00:33:21.866 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:33:21.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 issued rwts: total=848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.866 filename1: (groupid=0, jobs=1): err= 0: pid=556734: Fri Nov 15 10:52:08 2024 00:33:21.866 read: IOPS=82, BW=329KiB/s (337kB/s)(3320KiB/10080msec) 00:33:21.866 slat (usec): min=22, max=103, avg=70.55, stdev=10.47 00:33:21.866 clat (msec): min=22, max=406, avg=193.71, stdev=126.74 00:33:21.866 lat (msec): min=22, max=406, avg=193.78, stdev=126.74 00:33:21.866 clat percentiles (msec): 00:33:21.866 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.866 | 30.00th=[ 35], 40.00th=[ 186], 50.00th=[ 271], 60.00th=[ 279], 00:33:21.866 | 70.00th=[ 288], 80.00th=[ 313], 90.00th=[ 313], 95.00th=[ 317], 00:33:21.866 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:33:21.866 | 99.99th=[ 405] 00:33:21.866 bw ( KiB/s): min= 128, max= 1536, per=3.81%, avg=325.60, stdev=327.87, samples=20 00:33:21.866 iops : min= 32, max= 384, avg=81.40, stdev=81.97, samples=20 00:33:21.866 lat (msec) : 50=34.70%, 100=3.37%, 250=8.92%, 500=53.01% 00:33:21.866 cpu : usr=98.37%, sys=1.20%, ctx=14, majf=0, minf=9 00:33:21.866 IO depths : 1=4.2%, 2=10.5%, 4=25.1%, 8=52.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:33:21.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 issued rwts: total=830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.866 filename1: (groupid=0, jobs=1): err= 0: pid=556735: Fri Nov 15 10:52:08 2024 00:33:21.866 read: IOPS=107, BW=430KiB/s (440kB/s)(4352KiB/10128msec) 00:33:21.866 slat (nsec): min=3841, max=87116, avg=14088.13, stdev=12514.02 00:33:21.866 clat (msec): min=33, max=277, avg=147.62, stdev=74.01 00:33:21.866 lat (msec): min=33, max=277, avg=147.64, stdev=74.01 00:33:21.866 clat percentiles (msec): 00:33:21.866 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:33:21.866 | 30.00th=[ 89], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 197], 00:33:21.866 | 70.00th=[ 199], 80.00th=[ 207], 90.00th=[ 207], 95.00th=[ 209], 00:33:21.866 | 99.00th=[ 236], 99.50th=[ 236], 99.90th=[ 279], 99.95th=[ 279], 00:33:21.866 | 99.99th=[ 279] 00:33:21.866 bw ( KiB/s): min= 256, max= 1536, per=5.02%, avg=428.80, stdev=326.28, samples=20 00:33:21.866 iops : min= 64, max= 384, avg=107.20, stdev=81.57, samples=20 00:33:21.866 lat (msec) : 50=26.47%, 100=4.41%, 250=68.93%, 500=0.18% 00:33:21.866 cpu : usr=98.48%, sys=1.13%, ctx=18, majf=0, minf=9 00:33:21.866 IO depths : 1=2.8%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.7%, 32=0.0%, >=64=0.0% 00:33:21.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 issued rwts: total=1088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.866 filename1: (groupid=0, jobs=1): err= 0: pid=556736: Fri Nov 15 10:52:08 2024 00:33:21.866 read: IOPS=82, BW=330KiB/s (338kB/s)(3328KiB/10081msec) 00:33:21.866 slat (nsec): 
min=8156, max=48325, avg=17555.28, stdev=7858.39 00:33:21.866 clat (msec): min=32, max=371, avg=193.69, stdev=123.65 00:33:21.866 lat (msec): min=32, max=371, avg=193.71, stdev=123.65 00:33:21.866 clat percentiles (msec): 00:33:21.866 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:33:21.866 | 30.00th=[ 36], 40.00th=[ 199], 50.00th=[ 271], 60.00th=[ 279], 00:33:21.866 | 70.00th=[ 279], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 317], 00:33:21.866 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 372], 99.95th=[ 372], 00:33:21.866 | 99.99th=[ 372] 00:33:21.866 bw ( KiB/s): min= 128, max= 1664, per=3.82%, avg=326.40, stdev=353.54, samples=20 00:33:21.866 iops : min= 32, max= 416, avg=81.60, stdev=88.39, samples=20 00:33:21.866 lat (msec) : 50=34.62%, 100=1.92%, 250=9.62%, 500=53.85% 00:33:21.866 cpu : usr=98.30%, sys=1.29%, ctx=14, majf=0, minf=9 00:33:21.866 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:21.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.866 filename1: (groupid=0, jobs=1): err= 0: pid=556737: Fri Nov 15 10:52:08 2024 00:33:21.866 read: IOPS=85, BW=342KiB/s (350kB/s)(3456KiB/10107msec) 00:33:21.866 slat (usec): min=8, max=113, avg=67.65, stdev=16.40 00:33:21.866 clat (msec): min=24, max=393, avg=186.60, stdev=123.37 00:33:21.866 lat (msec): min=24, max=393, avg=186.67, stdev=123.37 00:33:21.866 clat percentiles (msec): 00:33:21.866 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.866 | 30.00th=[ 35], 40.00th=[ 167], 50.00th=[ 249], 60.00th=[ 275], 00:33:21.866 | 70.00th=[ 279], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 321], 00:33:21.866 | 99.00th=[ 388], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:33:21.866 | 99.99th=[ 393] 00:33:21.866 bw ( KiB/s): min= 128, max= 1552, per=3.98%, avg=339.20, stdev=345.18, samples=20 00:33:21.866 iops : min= 32, max= 388, avg=84.80, stdev=86.30, samples=20 00:33:21.866 lat (msec) : 50=35.19%, 100=2.08%, 250=13.89%, 500=48.84% 00:33:21.866 cpu : usr=98.35%, sys=1.22%, ctx=15, majf=0, minf=9 00:33:21.866 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:33:21.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.866 filename2: (groupid=0, jobs=1): err= 0: pid=556738: Fri Nov 15 10:52:08 2024 00:33:21.866 read: IOPS=80, BW=324KiB/s (332kB/s)(3264KiB/10075msec) 00:33:21.866 slat (usec): min=16, max=101, avg=27.27, stdev=12.87 00:33:21.866 clat (msec): min=21, max=437, avg=197.32, stdev=125.27 00:33:21.866 lat (msec): min=21, max=437, avg=197.34, stdev=125.27 00:33:21.866 clat percentiles (msec): 00:33:21.866 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:33:21.866 | 30.00th=[ 36], 40.00th=[ 234], 50.00th=[ 275], 60.00th=[ 275], 00:33:21.866 | 70.00th=[ 292], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 317], 00:33:21.866 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 439], 99.95th=[ 439], 00:33:21.866 | 99.99th=[ 439] 00:33:21.866 bw ( KiB/s): min= 128, max= 1664, per=3.75%, avg=320.00, stdev=356.03, samples=20 
00:33:21.866 iops : min= 32, max= 416, avg=80.00, stdev=89.01, samples=20 00:33:21.866 lat (msec) : 50=35.29%, 250=7.84%, 500=56.86% 00:33:21.866 cpu : usr=98.37%, sys=1.18%, ctx=14, majf=0, minf=9 00:33:21.866 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:33:21.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.866 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.866 filename2: (groupid=0, jobs=1): err= 0: pid=556739: Fri Nov 15 10:52:08 2024 00:33:21.866 read: IOPS=82, BW=329KiB/s (337kB/s)(3320KiB/10090msec) 00:33:21.866 slat (usec): min=18, max=101, avg=70.02, stdev=10.99 00:33:21.866 clat (msec): min=22, max=406, avg=193.91, stdev=126.75 00:33:21.866 lat (msec): min=22, max=406, avg=193.98, stdev=126.75 00:33:21.866 clat percentiles (msec): 00:33:21.866 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.867 | 30.00th=[ 35], 40.00th=[ 186], 50.00th=[ 271], 60.00th=[ 279], 00:33:21.867 | 70.00th=[ 288], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 317], 00:33:21.867 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:33:21.867 | 99.99th=[ 405] 00:33:21.867 bw ( KiB/s): min= 128, max= 1664, per=3.81%, avg=325.60, stdev=353.20, samples=20 00:33:21.867 iops : min= 32, max= 416, avg=81.40, stdev=88.30, samples=20 00:33:21.867 lat (msec) : 50=34.70%, 100=3.37%, 250=8.92%, 500=53.01% 00:33:21.867 cpu : usr=98.49%, sys=1.07%, ctx=17, majf=0, minf=9 00:33:21.867 IO depths : 1=4.3%, 2=10.6%, 4=25.1%, 8=51.9%, 16=8.1%, 32=0.0%, >=64=0.0% 00:33:21.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 issued rwts: total=830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.867 filename2: (groupid=0, jobs=1): err= 0: pid=556740: Fri Nov 15 10:52:08 2024 00:33:21.867 read: IOPS=104, BW=418KiB/s (428kB/s)(4224KiB/10100msec) 00:33:21.867 slat (nsec): min=8040, max=60488, avg=14976.60, stdev=8161.84 00:33:21.867 clat (msec): min=23, max=309, avg=151.67, stdev=74.13 00:33:21.867 lat (msec): min=23, max=309, avg=151.68, stdev=74.13 00:33:21.867 clat percentiles (msec): 00:33:21.867 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:33:21.867 | 30.00th=[ 146], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 197], 00:33:21.867 | 70.00th=[ 199], 80.00th=[ 207], 90.00th=[ 207], 95.00th=[ 211], 00:33:21.867 | 99.00th=[ 241], 99.50th=[ 241], 99.90th=[ 309], 99.95th=[ 309], 00:33:21.867 | 99.99th=[ 309] 00:33:21.867 bw ( KiB/s): min= 256, max= 1664, per=4.88%, avg=416.00, stdev=338.78, samples=20 00:33:21.867 iops : min= 64, max= 416, avg=104.00, stdev=84.69, samples=20 00:33:21.867 lat (msec) : 50=27.27%, 100=1.52%, 250=70.83%, 500=0.38% 00:33:21.867 cpu : usr=98.25%, sys=1.28%, ctx=24, majf=0, minf=9 00:33:21.867 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:33:21.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 issued rwts: total=1056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.867 filename2: (groupid=0, jobs=1): err= 0: 
pid=556741: Fri Nov 15 10:52:08 2024 00:33:21.867 read: IOPS=107, BW=430KiB/s (440kB/s)(4344KiB/10107msec) 00:33:21.867 slat (nsec): min=7760, max=92949, avg=14441.95, stdev=15104.03 00:33:21.867 clat (msec): min=23, max=312, avg=148.51, stdev=74.31 00:33:21.867 lat (msec): min=23, max=312, avg=148.52, stdev=74.31 00:33:21.867 clat percentiles (msec): 00:33:21.867 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:33:21.867 | 30.00th=[ 128], 40.00th=[ 167], 50.00th=[ 192], 60.00th=[ 194], 00:33:21.867 | 70.00th=[ 197], 80.00th=[ 201], 90.00th=[ 203], 95.00th=[ 224], 00:33:21.867 | 99.00th=[ 288], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 313], 00:33:21.867 | 99.99th=[ 313] 00:33:21.867 bw ( KiB/s): min= 224, max= 1664, per=5.02%, avg=428.00, stdev=332.59, samples=20 00:33:21.867 iops : min= 56, max= 416, avg=107.00, stdev=83.15, samples=20 00:33:21.867 lat (msec) : 50=26.52%, 100=1.47%, 250=69.24%, 500=2.76% 00:33:21.867 cpu : usr=98.57%, sys=1.02%, ctx=23, majf=0, minf=9 00:33:21.867 IO depths : 1=2.1%, 2=5.1%, 4=14.9%, 8=67.4%, 16=10.5%, 32=0.0%, >=64=0.0% 00:33:21.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 complete : 0=0.0%, 4=91.2%, 8=3.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 issued rwts: total=1086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.867 filename2: (groupid=0, jobs=1): err= 0: pid=556742: Fri Nov 15 10:52:08 2024 00:33:21.867 read: IOPS=82, BW=330KiB/s (338kB/s)(3328KiB/10094msec) 00:33:21.867 slat (usec): min=22, max=111, avg=74.12, stdev=11.30 00:33:21.867 clat (msec): min=24, max=389, avg=193.44, stdev=121.42 00:33:21.867 lat (msec): min=24, max=389, avg=193.52, stdev=121.42 00:33:21.867 clat percentiles (msec): 00:33:21.867 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.867 | 30.00th=[ 35], 40.00th=[ 190], 50.00th=[ 271], 60.00th=[ 275], 00:33:21.867 | 70.00th=[ 279], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 317], 00:33:21.867 | 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 388], 99.95th=[ 388], 00:33:21.867 | 99.99th=[ 388] 00:33:21.867 bw ( KiB/s): min= 128, max= 1536, per=3.82%, avg=326.40, stdev=341.17, samples=20 00:33:21.867 iops : min= 32, max= 384, avg=81.60, stdev=85.29, samples=20 00:33:21.867 lat (msec) : 50=34.62%, 250=13.46%, 500=51.92% 00:33:21.867 cpu : usr=98.33%, sys=1.23%, ctx=16, majf=0, minf=9 00:33:21.867 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:33:21.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.867 filename2: (groupid=0, jobs=1): err= 0: pid=556743: Fri Nov 15 10:52:08 2024 00:33:21.867 read: IOPS=127, BW=508KiB/s (520kB/s)(5148KiB/10131msec) 00:33:21.867 slat (nsec): min=4225, max=99651, avg=25073.31, stdev=25556.87 00:33:21.867 clat (usec): min=972, max=326435, avg=124727.62, stdev=88589.60 00:33:21.867 lat (usec): min=982, max=326492, avg=124752.70, stdev=88580.08 00:33:21.867 clat percentiles (usec): 00:33:21.867 | 1.00th=[ 1532], 5.00th=[ 1614], 10.00th=[ 1696], 20.00th=[ 32900], 00:33:21.867 | 30.00th=[ 33817], 40.00th=[ 39584], 50.00th=[185598], 60.00th=[193987], 00:33:21.867 | 70.00th=[196084], 80.00th=[200279], 90.00th=[206570], 95.00th=[221250], 00:33:21.867 | 99.00th=[248513], 
99.50th=[250610], 99.90th=[325059], 99.95th=[325059], 00:33:21.867 | 99.99th=[325059] 00:33:21.867 bw ( KiB/s): min= 240, max= 2232, per=5.96%, avg=508.40, stdev=506.94, samples=20 00:33:21.867 iops : min= 60, max= 558, avg=127.10, stdev=126.73, samples=20 00:33:21.867 lat (usec) : 1000=0.16% 00:33:21.867 lat (msec) : 2=13.36%, 4=1.86%, 10=1.32%, 50=23.62%, 100=1.40% 00:33:21.867 lat (msec) : 250=57.96%, 500=0.31% 00:33:21.867 cpu : usr=98.34%, sys=1.26%, ctx=25, majf=0, minf=9 00:33:21.867 IO depths : 1=1.9%, 2=7.2%, 4=21.1%, 8=58.7%, 16=11.0%, 32=0.0%, >=64=0.0% 00:33:21.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 complete : 0=0.0%, 4=93.5%, 8=1.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 issued rwts: total=1287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.867 filename2: (groupid=0, jobs=1): err= 0: pid=556744: Fri Nov 15 10:52:08 2024 00:33:21.867 read: IOPS=80, BW=324KiB/s (332kB/s)(3264KiB/10076msec) 00:33:21.867 slat (nsec): min=5564, max=74078, avg=26736.70, stdev=6907.05 00:33:21.867 clat (msec): min=32, max=371, avg=197.32, stdev=124.96 00:33:21.867 lat (msec): min=32, max=371, avg=197.34, stdev=124.96 00:33:21.867 clat percentiles (msec): 00:33:21.867 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:33:21.867 | 30.00th=[ 36], 40.00th=[ 234], 50.00th=[ 275], 60.00th=[ 275], 00:33:21.867 | 70.00th=[ 292], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 317], 00:33:21.867 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 372], 99.95th=[ 372], 00:33:21.867 | 99.99th=[ 372] 00:33:21.867 bw ( KiB/s): min= 128, max= 1664, per=3.75%, avg=320.00, stdev=356.03, samples=20 00:33:21.867 iops : min= 32, max= 416, avg=80.00, stdev=89.01, samples=20 00:33:21.867 lat (msec) : 50=35.29%, 250=7.84%, 500=56.86% 00:33:21.867 cpu : usr=98.33%, sys=1.23%, ctx=26, majf=0, minf=9 00:33:21.867 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:21.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.867 filename2: (groupid=0, jobs=1): err= 0: pid=556745: Fri Nov 15 10:52:08 2024 00:33:21.867 read: IOPS=80, BW=324KiB/s (331kB/s)(3264KiB/10084msec) 00:33:21.867 slat (usec): min=19, max=199, avg=73.23, stdev=14.23 00:33:21.867 clat (msec): min=21, max=384, avg=197.10, stdev=127.03 00:33:21.867 lat (msec): min=21, max=384, avg=197.18, stdev=127.03 00:33:21.867 clat percentiles (msec): 00:33:21.867 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:33:21.867 | 30.00th=[ 36], 40.00th=[ 209], 50.00th=[ 271], 60.00th=[ 275], 00:33:21.867 | 70.00th=[ 292], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 330], 00:33:21.867 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:33:21.867 | 99.99th=[ 384] 00:33:21.867 bw ( KiB/s): min= 128, max= 1664, per=3.75%, avg=320.00, stdev=355.50, samples=20 00:33:21.867 iops : min= 32, max= 416, avg=80.00, stdev=88.88, samples=20 00:33:21.867 lat (msec) : 50=35.29%, 250=10.54%, 500=54.17% 00:33:21.867 cpu : usr=98.35%, sys=1.20%, ctx=22, majf=0, minf=9 00:33:21.867 IO depths : 1=4.5%, 2=10.8%, 4=25.0%, 8=51.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:33:21.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 complete 
: 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.867 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:21.867 00:33:21.867 Run status group 0 (all jobs): 00:33:21.867 READ: bw=8528KiB/s (8733kB/s), 324KiB/s-508KiB/s (331kB/s-520kB/s), io=84.4MiB (88.5MB), run=10074-10131msec 00:33:21.867 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:21.867 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:21.867 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:21.867 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:21.867 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:21.867 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:21.867 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.867 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.867 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 
10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.868 bdev_null0 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.868 10:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.868 [2024-11-15 10:52:09.012388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.868 bdev_null1 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- 
# local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:21.868 { 00:33:21.868 "params": { 00:33:21.868 "name": "Nvme$subsystem", 00:33:21.868 "trtype": "$TEST_TRANSPORT", 00:33:21.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:21.868 "adrfam": "ipv4", 00:33:21.868 "trsvcid": "$NVMF_PORT", 00:33:21.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:21.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:21.868 "hdgst": ${hdgst:-false}, 00:33:21.868 "ddgst": ${ddgst:-false} 00:33:21.868 }, 00:33:21.868 "method": "bdev_nvme_attach_controller" 00:33:21.868 } 00:33:21.868 EOF 00:33:21.868 )") 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:21.868 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:21.869 { 00:33:21.869 "params": { 00:33:21.869 "name": "Nvme$subsystem", 00:33:21.869 "trtype": "$TEST_TRANSPORT", 00:33:21.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:21.869 "adrfam": "ipv4", 00:33:21.869 "trsvcid": "$NVMF_PORT", 00:33:21.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:21.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:21.869 "hdgst": ${hdgst:-false}, 00:33:21.869 "ddgst": ${ddgst:-false} 00:33:21.869 }, 00:33:21.869 "method": "bdev_nvme_attach_controller" 00:33:21.869 } 00:33:21.869 EOF 00:33:21.869 )") 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
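The trace above shows gen_nvmf_target_json building one bdev_nvme_attach_controller fragment per subsystem from a heredoc, collecting the fragments in the config array, and joining them with IFS=',' before piping the result through jq. A minimal standalone sketch of that pattern follows; only the per-subsystem fragment is copied from the trace, while the outer "subsystems"/"bdev"/"config" wrapper, the variable values, and the /tmp output path are illustrative assumptions.

#!/usr/bin/env bash
# Sketch of the heredoc + IFS=',' + jq pattern used by gen_nvmf_target_json above:
# one bdev_nvme_attach_controller fragment per subsystem, merged into one config.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with ',' and pretty-print/validate the final bdev config.
jq . <<JSON > /tmp/nvmf_target.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=,; printf '%s' "${config[*]}")
      ]
    }
  ]
}
JSON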
00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:21.869 "params": { 00:33:21.869 "name": "Nvme0", 00:33:21.869 "trtype": "tcp", 00:33:21.869 "traddr": "10.0.0.2", 00:33:21.869 "adrfam": "ipv4", 00:33:21.869 "trsvcid": "4420", 00:33:21.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:21.869 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:21.869 "hdgst": false, 00:33:21.869 "ddgst": false 00:33:21.869 }, 00:33:21.869 "method": "bdev_nvme_attach_controller" 00:33:21.869 },{ 00:33:21.869 "params": { 00:33:21.869 "name": "Nvme1", 00:33:21.869 "trtype": "tcp", 00:33:21.869 "traddr": "10.0.0.2", 00:33:21.869 "adrfam": "ipv4", 00:33:21.869 "trsvcid": "4420", 00:33:21.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:21.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:21.869 "hdgst": false, 00:33:21.869 "ddgst": false 00:33:21.869 }, 00:33:21.869 "method": "bdev_nvme_attach_controller" 00:33:21.869 }' 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:21.869 10:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:21.869 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:21.869 ... 00:33:21.869 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:21.869 ... 
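The fio job that produces the four threads below is generated on the fly by gen_fio_conf and passed in over /dev/fd/61, so it never appears verbatim in the log. A hedged reconstruction of an equivalent standalone job file, based on the parameters set at dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5) and on the filename0/filename1 job names printed by fio, might look like the sketch below; the Nvme0n1/Nvme1n1 bdev names and the time_based setting are assumptions, and /tmp/nvmf_target.json is the file written by the previous sketch.

# Hypothetical standalone equivalent of the generated job file (sketch only).
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

# Run it through the SPDK fio bdev plugin against the JSON config assembled above.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio /tmp/dif_rand_params.fio --spdk_json_conf=/tmp/nvmf_target.json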
00:33:21.869 fio-3.35 00:33:21.869 Starting 4 threads 00:33:27.132 00:33:27.132 filename0: (groupid=0, jobs=1): err= 0: pid=558238: Fri Nov 15 10:52:15 2024 00:33:27.132 read: IOPS=2091, BW=16.3MiB/s (17.1MB/s)(81.7MiB/5001msec) 00:33:27.132 slat (nsec): min=4220, max=69276, avg=13080.86, stdev=7113.14 00:33:27.132 clat (usec): min=365, max=7342, avg=3780.95, stdev=584.75 00:33:27.132 lat (usec): min=384, max=7351, avg=3794.03, stdev=585.13 00:33:27.132 clat percentiles (usec): 00:33:27.132 | 1.00th=[ 2180], 5.00th=[ 2900], 10.00th=[ 3097], 20.00th=[ 3359], 00:33:27.132 | 30.00th=[ 3556], 40.00th=[ 3687], 50.00th=[ 3785], 60.00th=[ 3884], 00:33:27.132 | 70.00th=[ 3982], 80.00th=[ 4113], 90.00th=[ 4359], 95.00th=[ 4686], 00:33:27.132 | 99.00th=[ 5604], 99.50th=[ 6128], 99.90th=[ 6849], 99.95th=[ 6980], 00:33:27.132 | 99.99th=[ 7308] 00:33:27.132 bw ( KiB/s): min=15824, max=17472, per=25.47%, avg=16595.67, stdev=595.02, samples=9 00:33:27.132 iops : min= 1978, max= 2184, avg=2074.44, stdev=74.38, samples=9 00:33:27.132 lat (usec) : 500=0.01% 00:33:27.132 lat (msec) : 2=0.76%, 4=69.65%, 10=29.58% 00:33:27.132 cpu : usr=95.82%, sys=3.68%, ctx=9, majf=0, minf=0 00:33:27.132 IO depths : 1=0.3%, 2=11.1%, 4=60.5%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:27.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.132 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.132 issued rwts: total=10460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.132 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:27.132 filename0: (groupid=0, jobs=1): err= 0: pid=558239: Fri Nov 15 10:52:15 2024 00:33:27.132 read: IOPS=1994, BW=15.6MiB/s (16.3MB/s)(77.9MiB/5002msec) 00:33:27.132 slat (nsec): min=3982, max=76963, avg=17692.37, stdev=9984.87 00:33:27.132 clat (usec): min=800, max=7619, avg=3947.15, stdev=596.44 00:33:27.132 lat (usec): min=818, max=7658, avg=3964.85, stdev=596.29 00:33:27.132 clat percentiles (usec): 00:33:27.132 | 1.00th=[ 2376], 5.00th=[ 3130], 10.00th=[ 3392], 20.00th=[ 3621], 00:33:27.132 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3884], 60.00th=[ 3949], 00:33:27.132 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 5014], 00:33:27.132 | 99.00th=[ 6063], 99.50th=[ 6325], 99.90th=[ 6915], 99.95th=[ 7111], 00:33:27.132 | 99.99th=[ 7635] 00:33:27.132 bw ( KiB/s): min=15120, max=16496, per=24.42%, avg=15912.89, stdev=415.78, samples=9 00:33:27.132 iops : min= 1890, max= 2062, avg=1989.11, stdev=51.97, samples=9 00:33:27.132 lat (usec) : 1000=0.06% 00:33:27.132 lat (msec) : 2=0.38%, 4=63.77%, 10=35.79% 00:33:27.132 cpu : usr=92.96%, sys=4.94%, ctx=218, majf=0, minf=9 00:33:27.132 IO depths : 1=0.2%, 2=13.1%, 4=59.2%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:27.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.132 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.132 issued rwts: total=9976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.132 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:27.132 filename1: (groupid=0, jobs=1): err= 0: pid=558240: Fri Nov 15 10:52:15 2024 00:33:27.132 read: IOPS=2056, BW=16.1MiB/s (16.8MB/s)(80.4MiB/5001msec) 00:33:27.132 slat (nsec): min=4012, max=92894, avg=15715.55, stdev=9099.82 00:33:27.132 clat (usec): min=585, max=7368, avg=3834.53, stdev=589.48 00:33:27.132 lat (usec): min=597, max=7407, avg=3850.24, stdev=590.08 00:33:27.132 clat percentiles (usec): 00:33:27.132 | 1.00th=[ 2245], 5.00th=[ 
2966], 10.00th=[ 3228], 20.00th=[ 3490], 00:33:27.132 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3818], 60.00th=[ 3884], 00:33:27.132 | 70.00th=[ 4015], 80.00th=[ 4146], 90.00th=[ 4424], 95.00th=[ 4817], 00:33:27.132 | 99.00th=[ 5800], 99.50th=[ 6128], 99.90th=[ 6783], 99.95th=[ 6980], 00:33:27.132 | 99.99th=[ 7308] 00:33:27.132 bw ( KiB/s): min=15424, max=16960, per=25.14%, avg=16385.78, stdev=468.56, samples=9 00:33:27.132 iops : min= 1928, max= 2120, avg=2048.22, stdev=58.57, samples=9 00:33:27.132 lat (usec) : 750=0.01%, 1000=0.04% 00:33:27.132 lat (msec) : 2=0.55%, 4=69.41%, 10=29.99% 00:33:27.132 cpu : usr=94.86%, sys=3.84%, ctx=58, majf=0, minf=9 00:33:27.132 IO depths : 1=0.3%, 2=13.7%, 4=58.2%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:27.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.132 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.132 issued rwts: total=10286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.132 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:27.132 filename1: (groupid=0, jobs=1): err= 0: pid=558241: Fri Nov 15 10:52:15 2024 00:33:27.132 read: IOPS=2003, BW=15.7MiB/s (16.4MB/s)(78.3MiB/5002msec) 00:33:27.132 slat (nsec): min=3670, max=65626, avg=17317.93, stdev=8466.52 00:33:27.132 clat (usec): min=958, max=7669, avg=3934.09, stdev=618.95 00:33:27.132 lat (usec): min=977, max=7692, avg=3951.41, stdev=619.01 00:33:27.132 clat percentiles (usec): 00:33:27.132 | 1.00th=[ 2343], 5.00th=[ 3064], 10.00th=[ 3326], 20.00th=[ 3589], 00:33:27.132 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3884], 60.00th=[ 3949], 00:33:27.132 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 5080], 00:33:27.132 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 6915], 99.95th=[ 7046], 00:33:27.132 | 99.99th=[ 7504] 00:33:27.132 bw ( KiB/s): min=15568, max=16704, per=24.46%, avg=15939.56, stdev=425.26, samples=9 00:33:27.132 iops : min= 1946, max= 2088, avg=1992.44, stdev=53.16, samples=9 00:33:27.132 lat (usec) : 1000=0.01% 00:33:27.132 lat (msec) : 2=0.35%, 4=63.80%, 10=35.84% 00:33:27.132 cpu : usr=95.64%, sys=3.88%, ctx=7, majf=0, minf=9 00:33:27.132 IO depths : 1=0.1%, 2=12.0%, 4=59.8%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:27.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.132 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.132 issued rwts: total=10023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.132 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:27.132 00:33:27.132 Run status group 0 (all jobs): 00:33:27.132 READ: bw=63.6MiB/s (66.7MB/s), 15.6MiB/s-16.3MiB/s (16.3MB/s-17.1MB/s), io=318MiB (334MB), run=5001-5002msec 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.132 00:33:27.132 real 0m24.271s 00:33:27.132 user 4m35.795s 00:33:27.132 sys 0m5.582s 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:27.132 10:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:27.132 ************************************ 00:33:27.132 END TEST fio_dif_rand_params 00:33:27.132 ************************************ 00:33:27.132 10:52:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:27.132 10:52:15 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:27.132 10:52:15 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:27.132 10:52:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:27.132 ************************************ 00:33:27.132 START TEST fio_dif_digest 00:33:27.132 ************************************ 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:27.132 
10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:27.132 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:27.133 bdev_null0 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:27.133 [2024-11-15 10:52:15.462698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:27.133 { 00:33:27.133 "params": { 00:33:27.133 "name": "Nvme$subsystem", 00:33:27.133 "trtype": "$TEST_TRANSPORT", 00:33:27.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:27.133 "adrfam": "ipv4", 00:33:27.133 "trsvcid": "$NVMF_PORT", 00:33:27.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:27.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:27.133 "hdgst": ${hdgst:-false}, 00:33:27.133 "ddgst": ${ddgst:-false} 00:33:27.133 }, 00:33:27.133 "method": "bdev_nvme_attach_controller" 00:33:27.133 } 00:33:27.133 EOF 00:33:27.133 )") 
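For reference, the rpc_cmd calls traced above for the digest test amount to the following direct scripts/rpc.py invocations; the arguments are copied from the trace, the TCP transport is assumed to have been created earlier in the run, and the RPC socket is left at its default.

# Direct rpc.py equivalents of the rpc_cmd calls above (default RPC socket).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Null bdev with 16-byte metadata and DIF type 3, as used by this digest test.
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Subsystem, namespace, and NVMe/TCP listener on 10.0.0.2:4420.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Teardown is symmetric (see destroy_subsystems later in the log):
# $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
# $RPC bdev_null_delete bdev_null0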
00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:27.133 "params": { 00:33:27.133 "name": "Nvme0", 00:33:27.133 "trtype": "tcp", 00:33:27.133 "traddr": "10.0.0.2", 00:33:27.133 "adrfam": "ipv4", 00:33:27.133 "trsvcid": "4420", 00:33:27.133 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:27.133 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:27.133 "hdgst": true, 00:33:27.133 "ddgst": true 00:33:27.133 }, 00:33:27.133 "method": "bdev_nvme_attach_controller" 00:33:27.133 }' 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:27.133 10:52:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:27.391 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:27.391 ... 
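The printf output above is the attach fragment that enables NVMe/TCP header and data digests (hdgst/ddgst) for the digest test. A sketch of the same setup as a standalone config file plus fio command line follows; the fragment fields and the 128 KiB / iodepth 3 / 3 jobs / 10 s parameters come from the trace, while the outer JSON wrapper, the Nvme0n1 bdev name, and the exact fio job options are assumptions.

# Sketch: digest-enabled attach config as a standalone file (wrapper keys assumed).
cat > /tmp/bdev_digest.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# Approximate fio command for the 3-thread, 128 KiB, iodepth 3, 10 s digest run.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --thread=1 \
    --filename=Nvme0n1 --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
    --runtime=10 --time_based=1 --spdk_json_conf=/tmp/bdev_digest.json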
00:33:27.391 fio-3.35 00:33:27.391 Starting 3 threads 00:33:39.601 00:33:39.601 filename0: (groupid=0, jobs=1): err= 0: pid=558992: Fri Nov 15 10:52:26 2024 00:33:39.601 read: IOPS=211, BW=26.5MiB/s (27.7MB/s)(266MiB/10047msec) 00:33:39.601 slat (nsec): min=3843, max=74496, avg=17839.59, stdev=5842.13 00:33:39.601 clat (usec): min=10830, max=61588, avg=14128.28, stdev=2305.60 00:33:39.601 lat (usec): min=10844, max=61600, avg=14146.12, stdev=2305.27 00:33:39.601 clat percentiles (usec): 00:33:39.601 | 1.00th=[11731], 5.00th=[12518], 10.00th=[12780], 20.00th=[13304], 00:33:39.601 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:33:39.601 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15139], 95.00th=[15664], 00:33:39.601 | 99.00th=[16581], 99.50th=[17171], 99.90th=[61604], 99.95th=[61604], 00:33:39.601 | 99.99th=[61604] 00:33:39.601 bw ( KiB/s): min=24064, max=28416, per=34.19%, avg=27200.00, stdev=928.15, samples=20 00:33:39.601 iops : min= 188, max= 222, avg=212.50, stdev= 7.25, samples=20 00:33:39.601 lat (msec) : 20=99.76%, 50=0.05%, 100=0.19% 00:33:39.601 cpu : usr=93.26%, sys=5.92%, ctx=71, majf=0, minf=10 00:33:39.601 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.601 issued rwts: total=2127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.601 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:39.601 filename0: (groupid=0, jobs=1): err= 0: pid=558993: Fri Nov 15 10:52:26 2024 00:33:39.601 read: IOPS=203, BW=25.4MiB/s (26.7MB/s)(256MiB/10044msec) 00:33:39.601 slat (nsec): min=4373, max=46211, avg=17467.73, stdev=5029.90 00:33:39.601 clat (usec): min=9710, max=53128, avg=14699.85, stdev=1491.23 00:33:39.601 lat (usec): min=9723, max=53142, avg=14717.32, stdev=1491.23 00:33:39.601 clat percentiles (usec): 00:33:39.601 | 1.00th=[12256], 5.00th=[13173], 10.00th=[13435], 20.00th=[13829], 00:33:39.601 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14615], 60.00th=[14877], 00:33:39.601 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:33:39.601 | 99.00th=[16909], 99.50th=[17171], 99.90th=[19792], 99.95th=[46924], 00:33:39.601 | 99.99th=[53216] 00:33:39.601 bw ( KiB/s): min=25344, max=27392, per=32.86%, avg=26137.60, stdev=476.40, samples=20 00:33:39.601 iops : min= 198, max= 214, avg=204.20, stdev= 3.72, samples=20 00:33:39.601 lat (msec) : 10=0.10%, 20=99.80%, 50=0.05%, 100=0.05% 00:33:39.601 cpu : usr=93.78%, sys=5.71%, ctx=20, majf=0, minf=11 00:33:39.601 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.601 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.601 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:39.601 filename0: (groupid=0, jobs=1): err= 0: pid=558994: Fri Nov 15 10:52:26 2024 00:33:39.601 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(259MiB/10005msec) 00:33:39.601 slat (nsec): min=4455, max=48010, avg=19911.55, stdev=5084.75 00:33:39.601 clat (usec): min=7612, max=20824, avg=14455.63, stdev=1091.85 00:33:39.601 lat (usec): min=7626, max=20835, avg=14475.54, stdev=1092.04 00:33:39.601 clat percentiles (usec): 00:33:39.601 | 1.00th=[11600], 5.00th=[12780], 10.00th=[13173], 20.00th=[13698], 
00:33:39.601 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:33:39.601 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:33:39.601 | 99.00th=[17171], 99.50th=[17433], 99.90th=[19268], 99.95th=[19268], 00:33:39.601 | 99.99th=[20841] 00:33:39.601 bw ( KiB/s): min=25344, max=27648, per=33.31%, avg=26496.00, stdev=596.05, samples=20 00:33:39.601 iops : min= 198, max= 216, avg=207.00, stdev= 4.66, samples=20 00:33:39.601 lat (msec) : 10=0.63%, 20=99.32%, 50=0.05% 00:33:39.601 cpu : usr=94.03%, sys=5.00%, ctx=363, majf=0, minf=12 00:33:39.601 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.601 issued rwts: total=2073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.601 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:39.601 00:33:39.601 Run status group 0 (all jobs): 00:33:39.601 READ: bw=77.7MiB/s (81.5MB/s), 25.4MiB/s-26.5MiB/s (26.7MB/s-27.7MB/s), io=781MiB (818MB), run=10005-10047msec 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.601 00:33:39.601 real 0m11.036s 00:33:39.601 user 0m29.261s 00:33:39.601 sys 0m1.927s 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:39.601 10:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:39.601 ************************************ 00:33:39.601 END TEST fio_dif_digest 00:33:39.601 ************************************ 00:33:39.601 10:52:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:39.601 10:52:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:39.601 10:52:26 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:39.601 10:52:26 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:33:39.601 10:52:26 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:39.601 10:52:26 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:33:39.601 10:52:26 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:39.601 10:52:26 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:39.601 rmmod nvme_tcp 00:33:39.601 rmmod nvme_fabrics 00:33:39.601 rmmod nvme_keyring 00:33:39.601 10:52:26 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:39.601 10:52:26 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:33:39.601 10:52:26 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:33:39.601 10:52:26 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 552821 ']' 00:33:39.601 10:52:26 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 552821 00:33:39.601 10:52:26 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 552821 ']' 00:33:39.601 10:52:26 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 552821 00:33:39.601 10:52:26 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:33:39.601 10:52:26 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:39.601 10:52:26 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 552821 00:33:39.601 10:52:26 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:39.601 10:52:26 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:39.601 10:52:26 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 552821' 00:33:39.601 killing process with pid 552821 00:33:39.601 10:52:26 nvmf_dif -- common/autotest_common.sh@971 -- # kill 552821 00:33:39.601 10:52:26 nvmf_dif -- common/autotest_common.sh@976 -- # wait 552821 00:33:39.601 10:52:26 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:39.602 10:52:26 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:39.602 Waiting for block devices as requested 00:33:39.602 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:33:39.860 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:39.861 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:39.861 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:40.119 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:40.119 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:40.119 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:40.119 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:40.377 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:40.377 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:40.377 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:40.377 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:40.635 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:40.635 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:40.635 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:40.635 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:40.893 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:40.893 10:52:29 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:40.893 10:52:29 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:40.893 10:52:29 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:33:40.893 10:52:29 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:33:40.893 10:52:29 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:40.893 10:52:29 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:33:40.893 10:52:29 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:40.893 10:52:29 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:40.893 10:52:29 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.893 10:52:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:40.893 10:52:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.424 10:52:31 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:43.424 
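The cleanup traced above (nvmftestfini followed by setup.sh reset) boils down to roughly the sequence sketched below. This is an approximation of the helpers involved, not their exact implementation: the PID and interface names are taken from this log, and remove_spdk_ns is represented by a plain ip netns delete of the namespace name seen in the trace.

# Approximate teardown performed by nvmftestfini and setup.sh reset above.
sync
modprobe -v -r nvme-tcp       # removes nvme_tcp plus nvme_fabrics/nvme_keyring, per the rmmod messages above
modprobe -v -r nvme-fabrics
kill 552821                   # stop the nvmf_tgt app (pid from this log; killprocess also waits for it)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset   # rebind NVMe/ioatdma devices to kernel drivers
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the SPDK_NVMF rules added for the test
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # stand-in for remove_spdk_ns
ip -4 addr flush cvl_0_1                               # clear the test address from the initiator NIC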
00:33:43.424 real 1m7.076s 00:33:43.424 user 6m33.105s 00:33:43.424 sys 0m17.106s 00:33:43.424 10:52:31 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:43.424 10:52:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:43.424 ************************************ 00:33:43.424 END TEST nvmf_dif 00:33:43.424 ************************************ 00:33:43.424 10:52:31 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:43.424 10:52:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:43.424 10:52:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:43.424 10:52:31 -- common/autotest_common.sh@10 -- # set +x 00:33:43.424 ************************************ 00:33:43.424 START TEST nvmf_abort_qd_sizes 00:33:43.424 ************************************ 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:43.424 * Looking for test storage... 00:33:43.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:43.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.424 --rc genhtml_branch_coverage=1 00:33:43.424 --rc genhtml_function_coverage=1 00:33:43.424 --rc genhtml_legend=1 00:33:43.424 --rc geninfo_all_blocks=1 00:33:43.424 --rc geninfo_unexecuted_blocks=1 00:33:43.424 00:33:43.424 ' 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:43.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.424 --rc genhtml_branch_coverage=1 00:33:43.424 --rc genhtml_function_coverage=1 00:33:43.424 --rc genhtml_legend=1 00:33:43.424 --rc geninfo_all_blocks=1 00:33:43.424 --rc geninfo_unexecuted_blocks=1 00:33:43.424 00:33:43.424 ' 00:33:43.424 10:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:43.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.424 --rc genhtml_branch_coverage=1 00:33:43.424 --rc genhtml_function_coverage=1 00:33:43.424 --rc genhtml_legend=1 00:33:43.424 --rc geninfo_all_blocks=1 00:33:43.424 --rc geninfo_unexecuted_blocks=1 00:33:43.424 00:33:43.425 ' 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:43.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.425 --rc genhtml_branch_coverage=1 00:33:43.425 --rc genhtml_function_coverage=1 00:33:43.425 --rc genhtml_legend=1 00:33:43.425 --rc geninfo_all_blocks=1 00:33:43.425 --rc geninfo_unexecuted_blocks=1 00:33:43.425 00:33:43.425 ' 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:43.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:33:43.425 10:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:33:45.322 Found 0000:82:00.0 (0x8086 - 0x159b) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:33:45.322 Found 0000:82:00.1 (0x8086 - 0x159b) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:33:45.322 Found net devices under 0000:82:00.0: cvl_0_0 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.322 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:33:45.323 Found net devices under 0000:82:00.1: cvl_0_1 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:45.323 10:52:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:45.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:45.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:33:45.323 00:33:45.323 --- 10.0.0.2 ping statistics --- 00:33:45.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.323 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:45.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
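[Editor's note] The nvmf_tcp_init sequence above moves one E810 port (cvl_0_0) into the network namespace cvl_0_0_ns_spdk as the target side (10.0.0.2/24), keeps the second port (cvl_0_1) in the root namespace as the initiator side (10.0.0.1/24), opens TCP port 4420 in iptables, and verifies reachability with ping in both directions. A minimal sketch of an equivalent topology, assuming a veth pair instead of the two physical ports used in this run (illustrative only, not taken from the trace):

    ip netns add tgt_ns                                   # target-side namespace
    ip link add veth_init type veth peer name veth_tgt    # virtual cable between root namespace and tgt_ns
    ip link set veth_tgt netns tgt_ns                     # move one end into the namespace
    ip addr add 10.0.0.1/24 dev veth_init                 # initiator address (root namespace)
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt   # target address (inside the namespace)
    ip link set veth_init up
    ip netns exec tgt_ns ip link set veth_tgt up
    ip netns exec tgt_ns ip link set lo up
    iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT   # mirrors the rule added in the trace
    ping -c 1 10.0.0.2                                    # reachability check toward the target side

(The output of the second ping continues below.)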
00:33:45.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:33:45.323 00:33:45.323 --- 10.0.0.1 ping statistics --- 00:33:45.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.323 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:45.323 10:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:46.702 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:46.702 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:46.702 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:46.702 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:46.702 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:46.702 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:46.702 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:46.702 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:46.702 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:46.702 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:46.702 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:46.702 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:46.702 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:46.702 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:46.702 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:46.702 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:48.610 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=563919 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 563919 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 563919 ']' 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:48.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:48.610 10:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:48.610 [2024-11-15 10:52:36.897772] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:33:48.610 [2024-11-15 10:52:36.897839] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:48.610 [2024-11-15 10:52:36.972175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:48.610 [2024-11-15 10:52:37.030511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:48.610 [2024-11-15 10:52:37.030563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:48.610 [2024-11-15 10:52:37.030587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:48.610 [2024-11-15 10:52:37.030599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:48.610 [2024-11-15 10:52:37.030608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:48.610 [2024-11-15 10:52:37.032035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:48.610 [2024-11-15 10:52:37.032090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:48.610 [2024-11-15 10:52:37.032157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:48.610 [2024-11-15 10:52:37.032161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:81:00.0 ]] 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:81:00.0 ]] 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:33:48.867 
10:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:81:00.0 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:81:00.0 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:48.867 10:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:48.867 ************************************ 00:33:48.867 START TEST spdk_target_abort 00:33:48.867 ************************************ 00:33:48.867 10:52:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:33:48.867 10:52:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:48.867 10:52:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:81:00.0 -b spdk_target 00:33:48.867 10:52:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.868 10:52:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:52.144 spdk_targetn1 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:52.144 [2024-11-15 10:52:40.045604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.144 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:52.145 [2024-11-15 10:52:40.094327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:52.145 10:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:55.421 Initializing NVMe Controllers 00:33:55.421 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:55.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:55.421 Initialization complete. Launching workers. 00:33:55.421 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11917, failed: 0 00:33:55.421 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1301, failed to submit 10616 00:33:55.421 success 713, unsuccessful 588, failed 0 00:33:55.421 10:52:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:55.421 10:52:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:58.696 Initializing NVMe Controllers 00:33:58.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:58.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:58.696 Initialization complete. Launching workers. 00:33:58.696 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8999, failed: 0 00:33:58.696 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1261, failed to submit 7738 00:33:58.696 success 337, unsuccessful 924, failed 0 00:33:58.696 10:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:58.696 10:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:01.971 Initializing NVMe Controllers 00:34:01.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:01.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:01.971 Initialization complete. Launching workers. 
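[Editor's note] The three runs above and below are the iterations of the rabort loop: SPDK's abort example is pointed at the subsystem's TCP listener once per queue depth in qds=(4 24 64), and each run reports how many abort commands were submitted and how many came back success, unsuccessful, or failed. A sketch of reproducing the same workload by hand, assuming SPDK_DIR points at the build tree used in this job; the flags are taken verbatim from the trace (in SPDK's perf-style option set, -q is the queue depth, -o the I/O size in bytes, and -w rw -M 50 a 50/50 read/write mix):

    for qd in 4 24 64; do
      "$SPDK_DIR/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

(The qd=64 results continue below.)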
00:34:01.971 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31282, failed: 0 00:34:01.971 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2661, failed to submit 28621 00:34:01.971 success 525, unsuccessful 2136, failed 0 00:34:01.971 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:01.971 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.971 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:01.971 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.971 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:01.971 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.971 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 563919 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 563919 ']' 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 563919 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 563919 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 563919' 00:34:03.866 killing process with pid 563919 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 563919 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 563919 00:34:03.866 00:34:03.866 real 0m15.063s 00:34:03.866 user 0m56.965s 00:34:03.866 sys 0m2.956s 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:03.866 ************************************ 00:34:03.866 END TEST spdk_target_abort 00:34:03.866 ************************************ 00:34:03.866 10:52:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:03.866 10:52:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:03.866 10:52:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:03.866 10:52:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:03.866 ************************************ 00:34:03.866 START TEST kernel_target_abort 00:34:03.866 
************************************ 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:03.866 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:04.123 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:04.123 10:52:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:05.070 Waiting for block devices as requested 00:34:05.329 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:34:05.329 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:05.329 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:05.588 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:05.588 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:05.588 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:05.846 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:05.846 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:05.846 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:05.846 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:06.104 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:06.104 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:06.104 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:06.104 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:06.363 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:06.363 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:06.363 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:06.363 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:06.363 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:06.363 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:06.363 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:06.363 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:06.363 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:06.363 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:06.363 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:06.363 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:06.621 No valid GPT data, bailing 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:06.621 10:52:54 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:06.621 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:34:06.621 00:34:06.621 Discovery Log Number of Records 2, Generation counter 2 00:34:06.621 =====Discovery Log Entry 0====== 00:34:06.621 trtype: tcp 00:34:06.621 adrfam: ipv4 00:34:06.621 subtype: current discovery subsystem 00:34:06.621 treq: not specified, sq flow control disable supported 00:34:06.621 portid: 1 00:34:06.621 trsvcid: 4420 00:34:06.621 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:06.621 traddr: 10.0.0.1 00:34:06.621 eflags: none 00:34:06.621 sectype: none 00:34:06.621 =====Discovery Log Entry 1====== 00:34:06.621 trtype: tcp 00:34:06.621 adrfam: ipv4 00:34:06.621 subtype: nvme subsystem 00:34:06.621 treq: not specified, sq flow control disable supported 00:34:06.621 portid: 1 00:34:06.621 trsvcid: 4420 00:34:06.621 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:06.621 traddr: 10.0.0.1 00:34:06.621 eflags: none 00:34:06.621 sectype: none 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:06.622 10:52:54 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:06.622 10:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:09.898 Initializing NVMe Controllers 00:34:09.898 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:09.898 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:09.898 Initialization complete. Launching workers. 00:34:09.898 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48127, failed: 0 00:34:09.898 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48127, failed to submit 0 00:34:09.898 success 0, unsuccessful 48127, failed 0 00:34:09.898 10:52:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:09.898 10:52:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:13.172 Initializing NVMe Controllers 00:34:13.172 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:13.172 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:13.172 Initialization complete. Launching workers. 
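[Editor's note] The configure_kernel_target block further up (nvmf/common.sh@686 through @705) builds the same kind of target that spdk_target_abort used, but inside the kernel nvmet driver: a subsystem with one namespace backed by /dev/nvme0n1, exposed on a TCP port at 10.0.0.1:4420. xtrace records only the echo commands, not their redirection targets, so the following sketch uses the kernel's standard nvmet configfs attribute names rather than lines copied from the trace (the trace also writes an "SPDK-<nqn>" model string, omitted here):

    nqn=nqn.2016-06.io.spdk:testnqn
    cfs=/sys/kernel/config/nvmet
    modprobe nvmet                                     # as in the trace; nvmet_tcp may also need to be loaded
    mkdir "$cfs/subsystems/$nqn"
    mkdir "$cfs/subsystems/$nqn/namespaces/1"
    mkdir "$cfs/ports/1"
    echo 1            > "$cfs/subsystems/$nqn/attr_allow_any_host"
    echo /dev/nvme0n1 > "$cfs/subsystems/$nqn/namespaces/1/device_path"
    echo 1            > "$cfs/subsystems/$nqn/namespaces/1/enable"
    echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
    echo tcp          > "$cfs/ports/1/addr_trtype"
    echo 4420         > "$cfs/ports/1/addr_trsvcid"
    echo ipv4         > "$cfs/ports/1/addr_adrfam"
    ln -s "$cfs/subsystems/$nqn" "$cfs/ports/1/subsystems/"

Running nvme discover -t tcp -a 10.0.0.1 -s 4420 with the generated host NQN, as the trace does, should then list the two Discovery Log entries printed above: the discovery subsystem and nqn.2016-06.io.spdk:testnqn.

(The qd=24 results continue below.)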
00:34:13.172 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 91895, failed: 0 00:34:13.172 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20450, failed to submit 71445 00:34:13.172 success 0, unsuccessful 20450, failed 0 00:34:13.172 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:13.172 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:16.447 Initializing NVMe Controllers 00:34:16.447 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:16.447 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:16.447 Initialization complete. Launching workers. 00:34:16.447 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87752, failed: 0 00:34:16.447 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21918, failed to submit 65834 00:34:16.447 success 0, unsuccessful 21918, failed 0 00:34:16.447 10:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:16.447 10:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:16.447 10:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:16.447 10:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:16.447 10:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:16.447 10:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:16.448 10:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:16.448 10:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:16.448 10:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:16.448 10:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:17.011 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:17.012 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:17.012 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:17.012 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:17.012 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:17.271 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:17.271 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:17.271 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:17.271 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:17.271 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:17.271 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:17.271 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:17.271 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:17.271 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:17.271 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:34:17.271 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:19.175 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:34:19.175 00:34:19.175 real 0m15.132s 00:34:19.175 user 0m6.023s 00:34:19.175 sys 0m3.534s 00:34:19.175 10:53:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:19.175 10:53:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:19.175 ************************************ 00:34:19.175 END TEST kernel_target_abort 00:34:19.175 ************************************ 00:34:19.175 10:53:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:19.175 10:53:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:19.175 10:53:07 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:19.175 10:53:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:19.175 10:53:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:19.175 10:53:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:19.175 10:53:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:19.176 10:53:07 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:19.176 rmmod nvme_tcp 00:34:19.176 rmmod nvme_fabrics 00:34:19.176 rmmod nvme_keyring 00:34:19.176 10:53:07 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:19.176 10:53:07 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:19.176 10:53:07 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:19.176 10:53:07 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 563919 ']' 00:34:19.176 10:53:07 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 563919 00:34:19.176 10:53:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 563919 ']' 00:34:19.176 10:53:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 563919 00:34:19.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (563919) - No such process 00:34:19.176 10:53:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 563919 is not found' 00:34:19.176 Process with pid 563919 is not found 00:34:19.176 10:53:07 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:19.176 10:53:07 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:20.548 Waiting for block devices as requested 00:34:20.548 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:34:20.548 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:20.548 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:20.806 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:20.806 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:20.806 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:20.806 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:21.065 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:21.065 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:21.065 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:21.065 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:21.323 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:21.323 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:21.323 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:21.323 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:21.580 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:21.580 0000:80:04.0 (8086 
0e20): vfio-pci -> ioatdma 00:34:21.580 10:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:21.580 10:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:21.580 10:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:21.580 10:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:34:21.580 10:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:21.580 10:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:34:21.580 10:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:21.580 10:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:21.580 10:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.580 10:53:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:21.580 10:53:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.112 10:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:24.112 00:34:24.112 real 0m40.744s 00:34:24.112 user 1m5.303s 00:34:24.112 sys 0m10.082s 00:34:24.112 10:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:24.112 10:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:24.112 ************************************ 00:34:24.112 END TEST nvmf_abort_qd_sizes 00:34:24.112 ************************************ 00:34:24.112 10:53:12 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:24.112 10:53:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:24.112 10:53:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:24.112 10:53:12 -- common/autotest_common.sh@10 -- # set +x 00:34:24.112 ************************************ 00:34:24.112 START TEST keyring_file 00:34:24.112 ************************************ 00:34:24.112 10:53:12 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:24.112 * Looking for test storage... 
00:34:24.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:24.112 10:53:12 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:24.112 10:53:12 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:34:24.112 10:53:12 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:24.112 10:53:12 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:24.112 10:53:12 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:24.112 10:53:12 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:24.112 10:53:12 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:24.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.112 --rc genhtml_branch_coverage=1 00:34:24.112 --rc genhtml_function_coverage=1 00:34:24.112 --rc genhtml_legend=1 00:34:24.112 --rc geninfo_all_blocks=1 00:34:24.112 --rc geninfo_unexecuted_blocks=1 00:34:24.112 00:34:24.112 ' 00:34:24.112 10:53:12 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:24.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.112 --rc genhtml_branch_coverage=1 00:34:24.112 --rc genhtml_function_coverage=1 00:34:24.112 --rc genhtml_legend=1 00:34:24.112 --rc geninfo_all_blocks=1 
00:34:24.112 --rc geninfo_unexecuted_blocks=1 00:34:24.112 00:34:24.112 ' 00:34:24.112 10:53:12 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:24.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.112 --rc genhtml_branch_coverage=1 00:34:24.112 --rc genhtml_function_coverage=1 00:34:24.112 --rc genhtml_legend=1 00:34:24.112 --rc geninfo_all_blocks=1 00:34:24.112 --rc geninfo_unexecuted_blocks=1 00:34:24.112 00:34:24.112 ' 00:34:24.112 10:53:12 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:24.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.112 --rc genhtml_branch_coverage=1 00:34:24.112 --rc genhtml_function_coverage=1 00:34:24.112 --rc genhtml_legend=1 00:34:24.112 --rc geninfo_all_blocks=1 00:34:24.112 --rc geninfo_unexecuted_blocks=1 00:34:24.112 00:34:24.112 ' 00:34:24.112 10:53:12 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:24.112 10:53:12 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:34:24.112 10:53:12 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:24.113 10:53:12 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:24.113 10:53:12 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:24.113 10:53:12 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:24.113 10:53:12 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:24.113 10:53:12 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.113 10:53:12 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.113 10:53:12 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.113 10:53:12 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:24.113 10:53:12 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@51 -- # : 0 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:24.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:24.113 10:53:12 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:24.113 10:53:12 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:24.113 10:53:12 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:24.113 10:53:12 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:24.113 10:53:12 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:24.113 10:53:12 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
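The xtrace that continues below is prep_key building the two TLS keys used by this run: it mktemps a file, formats the raw hex key into an NVMe TLS PSK interchange string (prefix NVMeTLSkey-1, a digest field, and a base64 payload, produced by the inline `python -` call visible in the trace), and locks the file down to mode 0600. A minimal out-of-band sketch of that step, assuming test/nvmf/common.sh has been sourced so format_interchange_psk is available; the redirect into the temp file is not visible in the xtrace and is an assumption here:

```bash
# Sketch only: mirror the "prep_key key0 00112233445566778899aabbccddeeff 0" trace below by hand.
# Assumes format_interchange_psk from test/nvmf/common.sh is in scope.
key0path=$(mktemp)                                              # e.g. /tmp/tmp.XXXXXXXXXX
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"   # assumed redirect
chmod 0600 "$key0path"          # keyring_file_add_key rejects anything looser (see the 0660 failure later)
echo "$key0path"                # this path is what gets handed to keyring_file_add_key key0
```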
00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.73CPXWb0na 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.73CPXWb0na 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.73CPXWb0na 00:34:24.113 10:53:12 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.73CPXWb0na 00:34:24.113 10:53:12 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZQGOUADXJZ 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:24.113 10:53:12 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZQGOUADXJZ 00:34:24.113 10:53:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZQGOUADXJZ 00:34:24.113 10:53:12 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ZQGOUADXJZ 00:34:24.113 10:53:12 keyring_file -- keyring/file.sh@30 -- # tgtpid=569826 00:34:24.113 10:53:12 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:24.113 10:53:12 keyring_file -- keyring/file.sh@32 -- # waitforlisten 569826 00:34:24.113 10:53:12 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 569826 ']' 00:34:24.113 10:53:12 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:24.113 10:53:12 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:24.113 10:53:12 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:24.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:24.113 10:53:12 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:24.113 10:53:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:24.113 [2024-11-15 10:53:12.455223] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:34:24.113 [2024-11-15 10:53:12.455305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569826 ] 00:34:24.113 [2024-11-15 10:53:12.521015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.113 [2024-11-15 10:53:12.577978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.371 10:53:12 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:24.371 10:53:12 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:34:24.371 10:53:12 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:24.371 10:53:12 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.371 10:53:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:24.371 [2024-11-15 10:53:12.835652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:24.628 null0 00:34:24.628 [2024-11-15 10:53:12.867716] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:24.628 [2024-11-15 10:53:12.867986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.628 10:53:12 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:24.628 [2024-11-15 10:53:12.895775] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:24.628 request: 00:34:24.628 { 00:34:24.628 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:24.628 "secure_channel": false, 00:34:24.628 "listen_address": { 00:34:24.628 "trtype": "tcp", 00:34:24.628 "traddr": "127.0.0.1", 00:34:24.628 "trsvcid": "4420" 00:34:24.628 }, 00:34:24.628 "method": "nvmf_subsystem_add_listener", 00:34:24.628 "req_id": 1 00:34:24.628 } 00:34:24.628 Got JSON-RPC error response 00:34:24.628 response: 00:34:24.628 { 00:34:24.628 "code": 
-32602, 00:34:24.628 "message": "Invalid parameters" 00:34:24.628 } 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:24.628 10:53:12 keyring_file -- keyring/file.sh@47 -- # bperfpid=569839 00:34:24.628 10:53:12 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:24.628 10:53:12 keyring_file -- keyring/file.sh@49 -- # waitforlisten 569839 /var/tmp/bperf.sock 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 569839 ']' 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:24.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:24.628 10:53:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:24.628 [2024-11-15 10:53:12.945289] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:34:24.628 [2024-11-15 10:53:12.945378] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569839 ] 00:34:24.628 [2024-11-15 10:53:13.008795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.628 [2024-11-15 10:53:13.069001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:24.885 10:53:13 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:24.885 10:53:13 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:34:24.885 10:53:13 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.73CPXWb0na 00:34:24.885 10:53:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.73CPXWb0na 00:34:25.141 10:53:13 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZQGOUADXJZ 00:34:25.141 10:53:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZQGOUADXJZ 00:34:25.399 10:53:13 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:25.399 10:53:13 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:25.399 10:53:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:25.399 10:53:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:25.399 10:53:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:25.656 
10:53:13 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.73CPXWb0na == \/\t\m\p\/\t\m\p\.\7\3\C\P\X\W\b\0\n\a ]] 00:34:25.656 10:53:13 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:25.656 10:53:13 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:25.656 10:53:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:25.656 10:53:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:25.656 10:53:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:25.913 10:53:14 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ZQGOUADXJZ == \/\t\m\p\/\t\m\p\.\Z\Q\G\O\U\A\D\X\J\Z ]] 00:34:25.913 10:53:14 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:25.913 10:53:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:25.913 10:53:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:25.913 10:53:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:25.913 10:53:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:25.913 10:53:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:26.176 10:53:14 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:26.176 10:53:14 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:26.176 10:53:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:26.177 10:53:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:26.177 10:53:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:26.177 10:53:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:26.177 10:53:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:26.477 10:53:14 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:26.477 10:53:14 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:26.477 10:53:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:26.760 [2024-11-15 10:53:15.031327] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:26.760 nvme0n1 00:34:26.760 10:53:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:26.760 10:53:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:26.760 10:53:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:26.760 10:53:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:26.760 10:53:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:26.760 10:53:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:27.043 10:53:15 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:27.043 10:53:15 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:27.043 10:53:15 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:34:27.043 10:53:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:27.043 10:53:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:27.043 10:53:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:27.043 10:53:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:27.303 10:53:15 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:27.303 10:53:15 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:27.560 Running I/O for 1 seconds... 00:34:28.492 10363.00 IOPS, 40.48 MiB/s 00:34:28.492 Latency(us) 00:34:28.492 [2024-11-15T09:53:16.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:28.492 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:28.492 nvme0n1 : 1.01 10411.66 40.67 0.00 0.00 12255.35 7475.96 23107.51 00:34:28.492 [2024-11-15T09:53:16.955Z] =================================================================================================================== 00:34:28.492 [2024-11-15T09:53:16.955Z] Total : 10411.66 40.67 0.00 0.00 12255.35 7475.96 23107.51 00:34:28.492 { 00:34:28.492 "results": [ 00:34:28.492 { 00:34:28.492 "job": "nvme0n1", 00:34:28.492 "core_mask": "0x2", 00:34:28.492 "workload": "randrw", 00:34:28.492 "percentage": 50, 00:34:28.492 "status": "finished", 00:34:28.492 "queue_depth": 128, 00:34:28.492 "io_size": 4096, 00:34:28.492 "runtime": 1.00762, 00:34:28.492 "iops": 10411.66312697247, 00:34:28.492 "mibps": 40.67055908973621, 00:34:28.492 "io_failed": 0, 00:34:28.492 "io_timeout": 0, 00:34:28.492 "avg_latency_us": 12255.352541614153, 00:34:28.492 "min_latency_us": 7475.958518518519, 00:34:28.492 "max_latency_us": 23107.508148148147 00:34:28.492 } 00:34:28.492 ], 00:34:28.492 "core_count": 1 00:34:28.492 } 00:34:28.492 10:53:16 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:28.492 10:53:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:28.749 10:53:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:34:28.749 10:53:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:28.749 10:53:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:28.749 10:53:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:28.749 10:53:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:28.749 10:53:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:29.007 10:53:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:29.007 10:53:17 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:34:29.007 10:53:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:29.007 10:53:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:29.007 10:53:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:29.007 10:53:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:29.007 10:53:17 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:29.264 10:53:17 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:34:29.264 10:53:17 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:29.264 10:53:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:34:29.264 10:53:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:29.264 10:53:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:34:29.264 10:53:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.264 10:53:17 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:34:29.264 10:53:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.264 10:53:17 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:29.264 10:53:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:29.521 [2024-11-15 10:53:17.895505] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:29.521 [2024-11-15 10:53:17.896137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24eb510 (107): Transport endpoint is not connected 00:34:29.521 [2024-11-15 10:53:17.897130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24eb510 (9): Bad file descriptor 00:34:29.521 [2024-11-15 10:53:17.898130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:29.521 [2024-11-15 10:53:17.898151] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:29.521 [2024-11-15 10:53:17.898165] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:29.521 [2024-11-15 10:53:17.898181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
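The connection errors above are the expected outcome of this step (keyring/file.sh@70): bdevperf attaches with key1, which is not the PSK the target was configured with for this host/subsystem pair, so the TLS handshake is torn down and the NOT wrapper treats the resulting JSON-RPC error as a pass. A rough manual equivalent using the same RPC socket and arguments as this run, shown only as a sketch:

```bash
# Sketch only: reproduce the negative attach by hand and assert that it fails.
# Socket, NQNs and key name are taken verbatim from the trace above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
if "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
  echo "attach with the wrong PSK unexpectedly succeeded" >&2
  exit 1
fi
echo "attach with key1 failed as expected"
```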
00:34:29.521 request: 00:34:29.521 { 00:34:29.521 "name": "nvme0", 00:34:29.521 "trtype": "tcp", 00:34:29.521 "traddr": "127.0.0.1", 00:34:29.521 "adrfam": "ipv4", 00:34:29.521 "trsvcid": "4420", 00:34:29.521 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.521 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.521 "prchk_reftag": false, 00:34:29.521 "prchk_guard": false, 00:34:29.521 "hdgst": false, 00:34:29.521 "ddgst": false, 00:34:29.521 "psk": "key1", 00:34:29.521 "allow_unrecognized_csi": false, 00:34:29.521 "method": "bdev_nvme_attach_controller", 00:34:29.521 "req_id": 1 00:34:29.521 } 00:34:29.521 Got JSON-RPC error response 00:34:29.521 response: 00:34:29.521 { 00:34:29.521 "code": -5, 00:34:29.521 "message": "Input/output error" 00:34:29.521 } 00:34:29.521 10:53:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:34:29.521 10:53:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:29.521 10:53:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:29.521 10:53:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:29.521 10:53:17 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:34:29.521 10:53:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:29.521 10:53:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:29.521 10:53:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:29.521 10:53:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:29.521 10:53:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:29.779 10:53:18 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:29.779 10:53:18 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:34:29.779 10:53:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:29.779 10:53:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:29.779 10:53:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:29.779 10:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:29.779 10:53:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:30.036 10:53:18 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:34:30.036 10:53:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:34:30.036 10:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:30.293 10:53:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:34:30.293 10:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:30.550 10:53:19 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:34:30.550 10:53:19 keyring_file -- keyring/file.sh@78 -- # jq length 00:34:30.550 10:53:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:30.807 10:53:19 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:34:30.807 10:53:19 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.73CPXWb0na 00:34:30.807 10:53:19 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.73CPXWb0na 00:34:30.807 10:53:19 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:34:30.807 10:53:19 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.73CPXWb0na 00:34:30.807 10:53:19 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:34:30.807 10:53:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:30.807 10:53:19 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:34:30.807 10:53:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:30.807 10:53:19 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.73CPXWb0na 00:34:30.807 10:53:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.73CPXWb0na 00:34:31.064 [2024-11-15 10:53:19.520133] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.73CPXWb0na': 0100660 00:34:31.064 [2024-11-15 10:53:19.520167] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:31.064 request: 00:34:31.064 { 00:34:31.064 "name": "key0", 00:34:31.064 "path": "/tmp/tmp.73CPXWb0na", 00:34:31.064 "method": "keyring_file_add_key", 00:34:31.064 "req_id": 1 00:34:31.064 } 00:34:31.064 Got JSON-RPC error response 00:34:31.064 response: 00:34:31.064 { 00:34:31.064 "code": -1, 00:34:31.064 "message": "Operation not permitted" 00:34:31.064 } 00:34:31.320 10:53:19 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:34:31.320 10:53:19 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:31.320 10:53:19 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:31.320 10:53:19 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:31.320 10:53:19 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.73CPXWb0na 00:34:31.320 10:53:19 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.73CPXWb0na 00:34:31.320 10:53:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.73CPXWb0na 00:34:31.577 10:53:19 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.73CPXWb0na 00:34:31.577 10:53:19 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:34:31.577 10:53:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:31.577 10:53:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:31.577 10:53:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:31.577 10:53:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:31.577 10:53:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:31.834 10:53:20 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:34:31.834 10:53:20 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:31.834 10:53:20 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:34:31.834 10:53:20 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:31.834 10:53:20 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:34:31.834 10:53:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:31.834 10:53:20 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:34:31.834 10:53:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:31.834 10:53:20 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:31.834 10:53:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:32.092 [2024-11-15 10:53:20.354420] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.73CPXWb0na': No such file or directory 00:34:32.092 [2024-11-15 10:53:20.354457] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:32.092 [2024-11-15 10:53:20.354482] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:32.092 [2024-11-15 10:53:20.354496] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:34:32.092 [2024-11-15 10:53:20.354510] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:32.092 [2024-11-15 10:53:20.354523] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:32.092 request: 00:34:32.092 { 00:34:32.092 "name": "nvme0", 00:34:32.092 "trtype": "tcp", 00:34:32.092 "traddr": "127.0.0.1", 00:34:32.092 "adrfam": "ipv4", 00:34:32.092 "trsvcid": "4420", 00:34:32.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:32.092 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:32.092 "prchk_reftag": false, 00:34:32.092 "prchk_guard": false, 00:34:32.092 "hdgst": false, 00:34:32.092 "ddgst": false, 00:34:32.092 "psk": "key0", 00:34:32.092 "allow_unrecognized_csi": false, 00:34:32.092 "method": "bdev_nvme_attach_controller", 00:34:32.092 "req_id": 1 00:34:32.092 } 00:34:32.092 Got JSON-RPC error response 00:34:32.092 response: 00:34:32.092 { 00:34:32.092 "code": -19, 00:34:32.092 "message": "No such device" 00:34:32.092 } 00:34:32.092 10:53:20 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:34:32.092 10:53:20 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:32.092 10:53:20 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:32.092 10:53:20 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:32.092 10:53:20 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:34:32.092 10:53:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:32.350 10:53:20 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:32.350 10:53:20 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:34:32.350 10:53:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:32.350 10:53:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:32.350 10:53:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:32.350 10:53:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:32.350 10:53:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oBdopsjoOn 00:34:32.350 10:53:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:32.350 10:53:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:32.350 10:53:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:32.350 10:53:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:32.350 10:53:20 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:32.350 10:53:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:32.350 10:53:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:32.350 10:53:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oBdopsjoOn 00:34:32.350 10:53:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oBdopsjoOn 00:34:32.350 10:53:20 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.oBdopsjoOn 00:34:32.350 10:53:20 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oBdopsjoOn 00:34:32.350 10:53:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oBdopsjoOn 00:34:32.607 10:53:20 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:32.607 10:53:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:32.864 nvme0n1 00:34:32.864 10:53:21 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:34:32.864 10:53:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:32.864 10:53:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:32.864 10:53:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:32.864 10:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:32.864 10:53:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:33.428 10:53:21 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:34:33.428 10:53:21 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:34:33.428 10:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:33.428 10:53:21 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:34:33.428 10:53:21 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:34:33.428 10:53:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:33.428 10:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:34:33.428 10:53:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:33.685 10:53:22 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:34:33.685 10:53:22 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:34:33.685 10:53:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:33.685 10:53:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:33.685 10:53:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:33.685 10:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:33.685 10:53:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:34.249 10:53:22 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:34:34.249 10:53:22 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:34.249 10:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:34.249 10:53:22 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:34:34.249 10:53:22 keyring_file -- keyring/file.sh@105 -- # jq length 00:34:34.249 10:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:34.506 10:53:22 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:34:34.506 10:53:22 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oBdopsjoOn 00:34:34.506 10:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oBdopsjoOn 00:34:34.763 10:53:23 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZQGOUADXJZ 00:34:34.763 10:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZQGOUADXJZ 00:34:35.326 10:53:23 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:35.326 10:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:35.584 nvme0n1 00:34:35.584 10:53:23 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:34:35.584 10:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:35.841 10:53:24 keyring_file -- keyring/file.sh@113 -- # config='{ 00:34:35.841 "subsystems": [ 00:34:35.841 { 00:34:35.841 "subsystem": "keyring", 00:34:35.841 "config": [ 00:34:35.841 { 00:34:35.841 "method": "keyring_file_add_key", 00:34:35.841 "params": { 00:34:35.841 "name": "key0", 00:34:35.841 "path": "/tmp/tmp.oBdopsjoOn" 00:34:35.841 } 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "method": "keyring_file_add_key", 00:34:35.841 "params": { 00:34:35.841 "name": "key1", 00:34:35.841 "path": "/tmp/tmp.ZQGOUADXJZ" 00:34:35.841 } 00:34:35.841 } 00:34:35.841 ] 
00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "subsystem": "iobuf", 00:34:35.841 "config": [ 00:34:35.841 { 00:34:35.841 "method": "iobuf_set_options", 00:34:35.841 "params": { 00:34:35.841 "small_pool_count": 8192, 00:34:35.841 "large_pool_count": 1024, 00:34:35.841 "small_bufsize": 8192, 00:34:35.841 "large_bufsize": 135168, 00:34:35.841 "enable_numa": false 00:34:35.841 } 00:34:35.841 } 00:34:35.841 ] 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "subsystem": "sock", 00:34:35.841 "config": [ 00:34:35.841 { 00:34:35.841 "method": "sock_set_default_impl", 00:34:35.841 "params": { 00:34:35.841 "impl_name": "posix" 00:34:35.841 } 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "method": "sock_impl_set_options", 00:34:35.841 "params": { 00:34:35.841 "impl_name": "ssl", 00:34:35.841 "recv_buf_size": 4096, 00:34:35.841 "send_buf_size": 4096, 00:34:35.841 "enable_recv_pipe": true, 00:34:35.841 "enable_quickack": false, 00:34:35.841 "enable_placement_id": 0, 00:34:35.841 "enable_zerocopy_send_server": true, 00:34:35.841 "enable_zerocopy_send_client": false, 00:34:35.841 "zerocopy_threshold": 0, 00:34:35.841 "tls_version": 0, 00:34:35.841 "enable_ktls": false 00:34:35.841 } 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "method": "sock_impl_set_options", 00:34:35.841 "params": { 00:34:35.841 "impl_name": "posix", 00:34:35.841 "recv_buf_size": 2097152, 00:34:35.841 "send_buf_size": 2097152, 00:34:35.841 "enable_recv_pipe": true, 00:34:35.841 "enable_quickack": false, 00:34:35.841 "enable_placement_id": 0, 00:34:35.841 "enable_zerocopy_send_server": true, 00:34:35.841 "enable_zerocopy_send_client": false, 00:34:35.841 "zerocopy_threshold": 0, 00:34:35.841 "tls_version": 0, 00:34:35.841 "enable_ktls": false 00:34:35.841 } 00:34:35.841 } 00:34:35.841 ] 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "subsystem": "vmd", 00:34:35.841 "config": [] 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "subsystem": "accel", 00:34:35.841 "config": [ 00:34:35.841 { 00:34:35.841 "method": "accel_set_options", 00:34:35.841 "params": { 00:34:35.841 "small_cache_size": 128, 00:34:35.841 "large_cache_size": 16, 00:34:35.841 "task_count": 2048, 00:34:35.841 "sequence_count": 2048, 00:34:35.841 "buf_count": 2048 00:34:35.841 } 00:34:35.841 } 00:34:35.841 ] 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "subsystem": "bdev", 00:34:35.841 "config": [ 00:34:35.841 { 00:34:35.841 "method": "bdev_set_options", 00:34:35.841 "params": { 00:34:35.841 "bdev_io_pool_size": 65535, 00:34:35.841 "bdev_io_cache_size": 256, 00:34:35.841 "bdev_auto_examine": true, 00:34:35.841 "iobuf_small_cache_size": 128, 00:34:35.841 "iobuf_large_cache_size": 16 00:34:35.841 } 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "method": "bdev_raid_set_options", 00:34:35.841 "params": { 00:34:35.841 "process_window_size_kb": 1024, 00:34:35.841 "process_max_bandwidth_mb_sec": 0 00:34:35.841 } 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "method": "bdev_iscsi_set_options", 00:34:35.841 "params": { 00:34:35.841 "timeout_sec": 30 00:34:35.841 } 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "method": "bdev_nvme_set_options", 00:34:35.841 "params": { 00:34:35.841 "action_on_timeout": "none", 00:34:35.841 "timeout_us": 0, 00:34:35.841 "timeout_admin_us": 0, 00:34:35.841 "keep_alive_timeout_ms": 10000, 00:34:35.841 "arbitration_burst": 0, 00:34:35.841 "low_priority_weight": 0, 00:34:35.841 "medium_priority_weight": 0, 00:34:35.841 "high_priority_weight": 0, 00:34:35.841 "nvme_adminq_poll_period_us": 10000, 00:34:35.841 "nvme_ioq_poll_period_us": 0, 00:34:35.841 "io_queue_requests": 512, 
00:34:35.841 "delay_cmd_submit": true, 00:34:35.841 "transport_retry_count": 4, 00:34:35.841 "bdev_retry_count": 3, 00:34:35.841 "transport_ack_timeout": 0, 00:34:35.841 "ctrlr_loss_timeout_sec": 0, 00:34:35.841 "reconnect_delay_sec": 0, 00:34:35.841 "fast_io_fail_timeout_sec": 0, 00:34:35.841 "disable_auto_failback": false, 00:34:35.841 "generate_uuids": false, 00:34:35.841 "transport_tos": 0, 00:34:35.841 "nvme_error_stat": false, 00:34:35.841 "rdma_srq_size": 0, 00:34:35.841 "io_path_stat": false, 00:34:35.841 "allow_accel_sequence": false, 00:34:35.841 "rdma_max_cq_size": 0, 00:34:35.841 "rdma_cm_event_timeout_ms": 0, 00:34:35.841 "dhchap_digests": [ 00:34:35.841 "sha256", 00:34:35.841 "sha384", 00:34:35.841 "sha512" 00:34:35.841 ], 00:34:35.841 "dhchap_dhgroups": [ 00:34:35.841 "null", 00:34:35.841 "ffdhe2048", 00:34:35.841 "ffdhe3072", 00:34:35.841 "ffdhe4096", 00:34:35.841 "ffdhe6144", 00:34:35.841 "ffdhe8192" 00:34:35.841 ] 00:34:35.841 } 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "method": "bdev_nvme_attach_controller", 00:34:35.841 "params": { 00:34:35.841 "name": "nvme0", 00:34:35.841 "trtype": "TCP", 00:34:35.841 "adrfam": "IPv4", 00:34:35.841 "traddr": "127.0.0.1", 00:34:35.841 "trsvcid": "4420", 00:34:35.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:35.841 "prchk_reftag": false, 00:34:35.841 "prchk_guard": false, 00:34:35.841 "ctrlr_loss_timeout_sec": 0, 00:34:35.841 "reconnect_delay_sec": 0, 00:34:35.841 "fast_io_fail_timeout_sec": 0, 00:34:35.841 "psk": "key0", 00:34:35.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:35.841 "hdgst": false, 00:34:35.841 "ddgst": false, 00:34:35.841 "multipath": "multipath" 00:34:35.841 } 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "method": "bdev_nvme_set_hotplug", 00:34:35.841 "params": { 00:34:35.841 "period_us": 100000, 00:34:35.841 "enable": false 00:34:35.841 } 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "method": "bdev_wait_for_examine" 00:34:35.841 } 00:34:35.841 ] 00:34:35.841 }, 00:34:35.841 { 00:34:35.841 "subsystem": "nbd", 00:34:35.841 "config": [] 00:34:35.841 } 00:34:35.841 ] 00:34:35.841 }' 00:34:35.841 10:53:24 keyring_file -- keyring/file.sh@115 -- # killprocess 569839 00:34:35.841 10:53:24 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 569839 ']' 00:34:35.841 10:53:24 keyring_file -- common/autotest_common.sh@956 -- # kill -0 569839 00:34:35.841 10:53:24 keyring_file -- common/autotest_common.sh@957 -- # uname 00:34:35.841 10:53:24 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:35.841 10:53:24 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 569839 00:34:35.841 10:53:24 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:35.841 10:53:24 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:35.841 10:53:24 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 569839' 00:34:35.841 killing process with pid 569839 00:34:35.841 10:53:24 keyring_file -- common/autotest_common.sh@971 -- # kill 569839 00:34:35.841 Received shutdown signal, test time was about 1.000000 seconds 00:34:35.841 00:34:35.841 Latency(us) 00:34:35.841 [2024-11-15T09:53:24.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.842 [2024-11-15T09:53:24.305Z] =================================================================================================================== 00:34:35.842 [2024-11-15T09:53:24.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:35.842 
10:53:24 keyring_file -- common/autotest_common.sh@976 -- # wait 569839 00:34:36.100 10:53:24 keyring_file -- keyring/file.sh@118 -- # bperfpid=571313 00:34:36.100 10:53:24 keyring_file -- keyring/file.sh@120 -- # waitforlisten 571313 /var/tmp/bperf.sock 00:34:36.100 10:53:24 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 571313 ']' 00:34:36.100 10:53:24 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:36.100 10:53:24 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:36.100 10:53:24 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:36.100 10:53:24 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:36.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:36.100 10:53:24 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:36.100 10:53:24 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:34:36.100 "subsystems": [ 00:34:36.100 { 00:34:36.100 "subsystem": "keyring", 00:34:36.100 "config": [ 00:34:36.100 { 00:34:36.100 "method": "keyring_file_add_key", 00:34:36.100 "params": { 00:34:36.100 "name": "key0", 00:34:36.100 "path": "/tmp/tmp.oBdopsjoOn" 00:34:36.100 } 00:34:36.100 }, 00:34:36.100 { 00:34:36.100 "method": "keyring_file_add_key", 00:34:36.100 "params": { 00:34:36.100 "name": "key1", 00:34:36.100 "path": "/tmp/tmp.ZQGOUADXJZ" 00:34:36.100 } 00:34:36.100 } 00:34:36.100 ] 00:34:36.100 }, 00:34:36.100 { 00:34:36.100 "subsystem": "iobuf", 00:34:36.100 "config": [ 00:34:36.100 { 00:34:36.100 "method": "iobuf_set_options", 00:34:36.100 "params": { 00:34:36.100 "small_pool_count": 8192, 00:34:36.100 "large_pool_count": 1024, 00:34:36.100 "small_bufsize": 8192, 00:34:36.100 "large_bufsize": 135168, 00:34:36.100 "enable_numa": false 00:34:36.100 } 00:34:36.100 } 00:34:36.100 ] 00:34:36.100 }, 00:34:36.100 { 00:34:36.100 "subsystem": "sock", 00:34:36.100 "config": [ 00:34:36.100 { 00:34:36.100 "method": "sock_set_default_impl", 00:34:36.100 "params": { 00:34:36.100 "impl_name": "posix" 00:34:36.100 } 00:34:36.100 }, 00:34:36.100 { 00:34:36.100 "method": "sock_impl_set_options", 00:34:36.100 "params": { 00:34:36.100 "impl_name": "ssl", 00:34:36.100 "recv_buf_size": 4096, 00:34:36.100 "send_buf_size": 4096, 00:34:36.100 "enable_recv_pipe": true, 00:34:36.100 "enable_quickack": false, 00:34:36.100 "enable_placement_id": 0, 00:34:36.100 "enable_zerocopy_send_server": true, 00:34:36.100 "enable_zerocopy_send_client": false, 00:34:36.100 "zerocopy_threshold": 0, 00:34:36.100 "tls_version": 0, 00:34:36.100 "enable_ktls": false 00:34:36.100 } 00:34:36.100 }, 00:34:36.100 { 00:34:36.100 "method": "sock_impl_set_options", 00:34:36.100 "params": { 00:34:36.100 "impl_name": "posix", 00:34:36.100 "recv_buf_size": 2097152, 00:34:36.100 "send_buf_size": 2097152, 00:34:36.100 "enable_recv_pipe": true, 00:34:36.100 "enable_quickack": false, 00:34:36.100 "enable_placement_id": 0, 00:34:36.100 "enable_zerocopy_send_server": true, 00:34:36.100 "enable_zerocopy_send_client": false, 00:34:36.100 "zerocopy_threshold": 0, 00:34:36.100 "tls_version": 0, 00:34:36.100 "enable_ktls": false 00:34:36.100 } 00:34:36.100 } 00:34:36.100 ] 00:34:36.100 }, 00:34:36.100 { 00:34:36.100 "subsystem": "vmd", 00:34:36.100 "config": [] 
00:34:36.100 }, 00:34:36.100 { 00:34:36.100 "subsystem": "accel", 00:34:36.100 "config": [ 00:34:36.100 { 00:34:36.100 "method": "accel_set_options", 00:34:36.100 "params": { 00:34:36.100 "small_cache_size": 128, 00:34:36.100 "large_cache_size": 16, 00:34:36.100 "task_count": 2048, 00:34:36.100 "sequence_count": 2048, 00:34:36.100 "buf_count": 2048 00:34:36.100 } 00:34:36.100 } 00:34:36.100 ] 00:34:36.100 }, 00:34:36.100 { 00:34:36.100 "subsystem": "bdev", 00:34:36.100 "config": [ 00:34:36.100 { 00:34:36.100 "method": "bdev_set_options", 00:34:36.100 "params": { 00:34:36.100 "bdev_io_pool_size": 65535, 00:34:36.100 "bdev_io_cache_size": 256, 00:34:36.100 "bdev_auto_examine": true, 00:34:36.100 "iobuf_small_cache_size": 128, 00:34:36.100 "iobuf_large_cache_size": 16 00:34:36.100 } 00:34:36.100 }, 00:34:36.100 { 00:34:36.100 "method": "bdev_raid_set_options", 00:34:36.100 "params": { 00:34:36.100 "process_window_size_kb": 1024, 00:34:36.100 "process_max_bandwidth_mb_sec": 0 00:34:36.100 } 00:34:36.100 }, 00:34:36.100 { 00:34:36.100 "method": "bdev_iscsi_set_options", 00:34:36.100 "params": { 00:34:36.100 "timeout_sec": 30 00:34:36.100 } 00:34:36.100 }, 00:34:36.101 { 00:34:36.101 "method": "bdev_nvme_set_options", 00:34:36.101 "params": { 00:34:36.101 "action_on_timeout": "none", 00:34:36.101 "timeout_us": 0, 00:34:36.101 "timeout_admin_us": 0, 00:34:36.101 "keep_alive_timeout_ms": 10000, 00:34:36.101 "arbitration_burst": 0, 00:34:36.101 "low_priority_weight": 0, 00:34:36.101 "medium_priority_weight": 0, 00:34:36.101 "high_priority_weight": 0, 00:34:36.101 "nvme_adminq_poll_period_us": 10000, 00:34:36.101 "nvme_ioq_poll_period_us": 0, 00:34:36.101 "io_queue_requests": 512, 00:34:36.101 "delay_cmd_submit": true, 00:34:36.101 "transport_retry_count": 4, 00:34:36.101 "bdev_retry_count": 3, 00:34:36.101 "transport_ack_timeout": 0, 00:34:36.101 "ctrlr_loss_timeout_sec": 0, 00:34:36.101 "reconnect_delay_sec": 0, 00:34:36.101 "fast_io_fail_timeout_sec": 0, 00:34:36.101 "disable_auto_failback": false, 00:34:36.101 "generate_uuids": false, 00:34:36.101 "transport_tos": 0, 00:34:36.101 "nvme_error_stat": false, 00:34:36.101 "rdma_srq_size": 0, 00:34:36.101 "io_path_stat": false, 00:34:36.101 "allow_accel_sequence": false, 00:34:36.101 "rdma_max_cq_size": 0, 00:34:36.101 "rdma_cm_event_timeout_ms": 0, 00:34:36.101 "dhchap_digests": [ 00:34:36.101 "sha256", 00:34:36.101 "sha384", 00:34:36.101 "sha512" 00:34:36.101 ], 00:34:36.101 "dhchap_dhgroups": [ 00:34:36.101 "null", 00:34:36.101 "ffdhe2048", 00:34:36.101 "ffdhe3072", 00:34:36.101 "ffdhe4096", 00:34:36.101 "ffdhe6144", 00:34:36.101 "ffdhe8192" 00:34:36.101 ] 00:34:36.101 } 00:34:36.101 }, 00:34:36.101 { 00:34:36.101 "method": "bdev_nvme_attach_controller", 00:34:36.101 "params": { 00:34:36.101 "name": "nvme0", 00:34:36.101 "trtype": "TCP", 00:34:36.101 "adrfam": "IPv4", 00:34:36.101 "traddr": "127.0.0.1", 00:34:36.101 "trsvcid": "4420", 00:34:36.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:36.101 "prchk_reftag": false, 00:34:36.101 "prchk_guard": false, 00:34:36.101 "ctrlr_loss_timeout_sec": 0, 00:34:36.101 "reconnect_delay_sec": 0, 00:34:36.101 "fast_io_fail_timeout_sec": 0, 00:34:36.101 "psk": "key0", 00:34:36.101 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:36.101 "hdgst": false, 00:34:36.101 "ddgst": false, 00:34:36.101 "multipath": "multipath" 00:34:36.101 } 00:34:36.101 }, 00:34:36.101 { 00:34:36.101 "method": "bdev_nvme_set_hotplug", 00:34:36.101 "params": { 00:34:36.101 "period_us": 100000, 00:34:36.101 "enable": false 00:34:36.101 } 
00:34:36.101 }, 00:34:36.101 { 00:34:36.101 "method": "bdev_wait_for_examine" 00:34:36.101 } 00:34:36.101 ] 00:34:36.101 }, 00:34:36.101 { 00:34:36.101 "subsystem": "nbd", 00:34:36.101 "config": [] 00:34:36.101 } 00:34:36.101 ] 00:34:36.101 }' 00:34:36.101 10:53:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:36.101 [2024-11-15 10:53:24.448578] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:34:36.101 [2024-11-15 10:53:24.448662] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid571313 ] 00:34:36.101 [2024-11-15 10:53:24.520390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:36.359 [2024-11-15 10:53:24.585027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:36.359 [2024-11-15 10:53:24.776758] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:36.616 10:53:24 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:36.616 10:53:24 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:34:36.616 10:53:24 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:34:36.616 10:53:24 keyring_file -- keyring/file.sh@121 -- # jq length 00:34:36.616 10:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:36.873 10:53:25 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:36.873 10:53:25 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:34:36.873 10:53:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:36.873 10:53:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:36.873 10:53:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:36.873 10:53:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:36.873 10:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:37.130 10:53:25 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:34:37.130 10:53:25 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:34:37.130 10:53:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:37.130 10:53:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:37.130 10:53:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:37.130 10:53:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:37.130 10:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:37.386 10:53:25 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:34:37.386 10:53:25 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:34:37.386 10:53:25 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:34:37.386 10:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:37.644 10:53:25 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:34:37.644 10:53:25 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:37.644 10:53:25 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.oBdopsjoOn /tmp/tmp.ZQGOUADXJZ 00:34:37.644 10:53:25 keyring_file -- keyring/file.sh@20 -- # killprocess 571313 00:34:37.644 10:53:25 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 571313 ']' 00:34:37.644 10:53:25 keyring_file -- common/autotest_common.sh@956 -- # kill -0 571313 00:34:37.644 10:53:25 keyring_file -- common/autotest_common.sh@957 -- # uname 00:34:37.644 10:53:25 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:37.644 10:53:25 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 571313 00:34:37.644 10:53:26 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:37.644 10:53:26 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:37.644 10:53:26 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 571313' 00:34:37.644 killing process with pid 571313 00:34:37.644 10:53:26 keyring_file -- common/autotest_common.sh@971 -- # kill 571313 00:34:37.644 Received shutdown signal, test time was about 1.000000 seconds 00:34:37.644 00:34:37.644 Latency(us) 00:34:37.644 [2024-11-15T09:53:26.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.644 [2024-11-15T09:53:26.107Z] =================================================================================================================== 00:34:37.644 [2024-11-15T09:53:26.107Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:37.644 10:53:26 keyring_file -- common/autotest_common.sh@976 -- # wait 571313 00:34:37.901 10:53:26 keyring_file -- keyring/file.sh@21 -- # killprocess 569826 00:34:37.901 10:53:26 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 569826 ']' 00:34:37.901 10:53:26 keyring_file -- common/autotest_common.sh@956 -- # kill -0 569826 00:34:37.901 10:53:26 keyring_file -- common/autotest_common.sh@957 -- # uname 00:34:37.901 10:53:26 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:37.901 10:53:26 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 569826 00:34:37.901 10:53:26 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:37.901 10:53:26 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:37.901 10:53:26 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 569826' 00:34:37.901 killing process with pid 569826 00:34:37.901 10:53:26 keyring_file -- common/autotest_common.sh@971 -- # kill 569826 00:34:37.901 10:53:26 keyring_file -- common/autotest_common.sh@976 -- # wait 569826 00:34:38.465 00:34:38.465 real 0m14.505s 00:34:38.465 user 0m36.924s 00:34:38.465 sys 0m3.226s 00:34:38.465 10:53:26 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:38.465 10:53:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:38.465 ************************************ 00:34:38.465 END TEST keyring_file 00:34:38.465 ************************************ 00:34:38.465 10:53:26 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:34:38.465 10:53:26 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:38.465 10:53:26 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:38.465 10:53:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:38.465 10:53:26 -- 
common/autotest_common.sh@10 -- # set +x 00:34:38.465 ************************************ 00:34:38.465 START TEST keyring_linux 00:34:38.465 ************************************ 00:34:38.465 10:53:26 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:38.465 Joined session keyring: 358440978 00:34:38.465 * Looking for test storage... 00:34:38.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:38.465 10:53:26 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:38.465 10:53:26 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:34:38.465 10:53:26 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:38.465 10:53:26 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@345 -- # : 1 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:38.465 10:53:26 keyring_linux -- scripts/common.sh@368 -- # return 0 00:34:38.465 10:53:26 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:38.465 10:53:26 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:38.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.465 --rc genhtml_branch_coverage=1 00:34:38.465 --rc genhtml_function_coverage=1 00:34:38.466 --rc genhtml_legend=1 00:34:38.466 --rc geninfo_all_blocks=1 00:34:38.466 --rc geninfo_unexecuted_blocks=1 00:34:38.466 00:34:38.466 ' 00:34:38.466 10:53:26 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:38.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.466 --rc genhtml_branch_coverage=1 00:34:38.466 --rc genhtml_function_coverage=1 00:34:38.466 --rc genhtml_legend=1 00:34:38.466 --rc geninfo_all_blocks=1 00:34:38.466 --rc geninfo_unexecuted_blocks=1 00:34:38.466 00:34:38.466 ' 00:34:38.466 10:53:26 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:38.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.466 --rc genhtml_branch_coverage=1 00:34:38.466 --rc genhtml_function_coverage=1 00:34:38.466 --rc genhtml_legend=1 00:34:38.466 --rc geninfo_all_blocks=1 00:34:38.466 --rc geninfo_unexecuted_blocks=1 00:34:38.466 00:34:38.466 ' 00:34:38.466 10:53:26 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:38.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.466 --rc genhtml_branch_coverage=1 00:34:38.466 --rc genhtml_function_coverage=1 00:34:38.466 --rc genhtml_legend=1 00:34:38.466 --rc geninfo_all_blocks=1 00:34:38.466 --rc geninfo_unexecuted_blocks=1 00:34:38.466 00:34:38.466 ' 00:34:38.466 10:53:26 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:38.466 10:53:26 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:34:38.466 10:53:26 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:38.466 10:53:26 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:38.466 10:53:26 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:38.466 10:53:26 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.466 10:53:26 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.466 10:53:26 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.466 10:53:26 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:38.466 10:53:26 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
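For context on the setup that continues below: linux.sh stores each TLS PSK in the NVMe-oF interchange format ("NVMeTLSkey-1:00:<base64 payload>:") both as a 0600 file under /tmp and as a "user" key in the kernel session keyring; the keyring entry is what the later bdevperf attach references via --psk :spdk-test:key0. A minimal standalone sketch of that registration with the keyutils CLI, reusing the key0 PSK string exactly as it appears further down in this trace (the derivation itself is performed by format_interchange_psk in the test's common.sh and is not re-derived here):

  psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'    # key0 PSK as printed by this run
  printf '%s' "$psk" > /tmp/:spdk-test:key0 && chmod 0600 /tmp/:spdk-test:key0   # file-based copy, mode 0600
  keyctl add user ':spdk-test:key0' "$psk" @s    # registers the PSK in the session keyring; prints the serial (580667999 in this run)
  keyctl search @s user ':spdk-test:key0'        # same serial; the test compares it with .sn from keyring_get_keys
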
00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:38.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:38.466 10:53:26 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:38.466 10:53:26 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:38.466 10:53:26 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:38.466 10:53:26 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:38.466 10:53:26 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:38.466 10:53:26 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:38.466 /tmp/:spdk-test:key0 00:34:38.466 10:53:26 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:38.466 10:53:26 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:38.466 
10:53:26 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:38.466 10:53:26 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:38.724 10:53:26 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:38.724 10:53:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:38.724 /tmp/:spdk-test:key1 00:34:38.724 10:53:26 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=571788 00:34:38.724 10:53:26 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:38.724 10:53:26 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 571788 00:34:38.724 10:53:26 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 571788 ']' 00:34:38.724 10:53:26 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.724 10:53:26 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:38.724 10:53:26 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.724 10:53:26 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:38.724 10:53:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:38.724 [2024-11-15 10:53:27.000564] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:34:38.724 [2024-11-15 10:53:27.000673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid571788 ] 00:34:38.724 [2024-11-15 10:53:27.068789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.724 [2024-11-15 10:53:27.129040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.982 10:53:27 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:38.982 10:53:27 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:34:38.982 10:53:27 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:38.982 10:53:27 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.982 10:53:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:38.982 [2024-11-15 10:53:27.397410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:38.982 null0 00:34:38.982 [2024-11-15 10:53:27.429463] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:38.982 [2024-11-15 10:53:27.429952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:39.240 10:53:27 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.240 10:53:27 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:39.240 580667999 00:34:39.240 10:53:27 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:39.240 960843451 00:34:39.240 10:53:27 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=571804 00:34:39.240 10:53:27 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:39.240 10:53:27 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 571804 /var/tmp/bperf.sock 00:34:39.240 10:53:27 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 571804 ']' 00:34:39.240 10:53:27 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:39.240 10:53:27 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:39.240 10:53:27 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:39.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:39.240 10:53:27 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:39.240 10:53:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:39.240 [2024-11-15 10:53:27.496100] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:34:39.241 [2024-11-15 10:53:27.496163] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid571804 ] 00:34:39.241 [2024-11-15 10:53:27.559743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.241 [2024-11-15 10:53:27.615625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.499 10:53:27 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:39.499 10:53:27 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:34:39.499 10:53:27 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:39.499 10:53:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:39.756 10:53:27 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:39.756 10:53:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:40.013 10:53:28 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:40.013 10:53:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:40.270 [2024-11-15 10:53:28.591966] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:40.270 nvme0n1 00:34:40.270 10:53:28 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:34:40.270 10:53:28 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:40.270 10:53:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:40.270 10:53:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:40.270 10:53:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:40.270 10:53:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:40.526 10:53:28 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:40.526 10:53:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:40.526 10:53:28 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:40.526 10:53:28 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:40.526 10:53:28 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:40.526 10:53:28 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:40.526 10:53:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:40.783 10:53:29 keyring_linux -- keyring/linux.sh@25 -- # sn=580667999 00:34:40.783 10:53:29 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:40.783 10:53:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:40.783 10:53:29 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 580667999 == \5\8\0\6\6\7\9\9\9 ]] 00:34:40.783 10:53:29 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 580667999 00:34:40.783 10:53:29 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:34:40.783 10:53:29 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:41.040 Running I/O for 1 seconds... 00:34:41.971 9871.00 IOPS, 38.56 MiB/s 00:34:41.971 Latency(us) 00:34:41.971 [2024-11-15T09:53:30.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.971 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:41.971 nvme0n1 : 1.01 9871.91 38.56 0.00 0.00 12876.77 7330.32 18544.26 00:34:41.971 [2024-11-15T09:53:30.434Z] =================================================================================================================== 00:34:41.971 [2024-11-15T09:53:30.434Z] Total : 9871.91 38.56 0.00 0.00 12876.77 7330.32 18544.26 00:34:41.971 { 00:34:41.971 "results": [ 00:34:41.971 { 00:34:41.971 "job": "nvme0n1", 00:34:41.971 "core_mask": "0x2", 00:34:41.971 "workload": "randread", 00:34:41.971 "status": "finished", 00:34:41.971 "queue_depth": 128, 00:34:41.971 "io_size": 4096, 00:34:41.971 "runtime": 1.012874, 00:34:41.971 "iops": 9871.909042980667, 00:34:41.971 "mibps": 38.56214469914323, 00:34:41.971 "io_failed": 0, 00:34:41.971 "io_timeout": 0, 00:34:41.971 "avg_latency_us": 12876.771799254, 00:34:41.971 "min_latency_us": 7330.322962962963, 00:34:41.971 "max_latency_us": 18544.26074074074 00:34:41.971 } 00:34:41.971 ], 00:34:41.971 "core_count": 1 00:34:41.971 } 00:34:41.971 10:53:30 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:41.971 10:53:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:42.228 10:53:30 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:34:42.228 10:53:30 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:34:42.228 10:53:30 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:42.228 10:53:30 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:42.228 10:53:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:42.228 10:53:30 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:42.485 10:53:30 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:34:42.485 10:53:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:42.485 10:53:30 keyring_linux -- keyring/linux.sh@23 -- # return 00:34:42.485 10:53:30 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:42.485 10:53:30 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:34:42.485 10:53:30 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:34:42.485 10:53:30 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:34:42.485 10:53:30 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:42.485 10:53:30 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:34:42.485 10:53:30 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:42.485 10:53:30 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:42.486 10:53:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:42.743 [2024-11-15 10:53:31.194169] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:42.743 [2024-11-15 10:53:31.194657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc40bc0 (107): Transport endpoint is not connected 00:34:42.743 [2024-11-15 10:53:31.195647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc40bc0 (9): Bad file descriptor 00:34:42.743 [2024-11-15 10:53:31.196646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:42.743 [2024-11-15 10:53:31.196676] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:42.743 [2024-11-15 10:53:31.196689] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:42.743 [2024-11-15 10:53:31.196704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
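This attach attempt is the negative leg of the test: the earlier attach with :spdk-test:key0 succeeded, so attaching the same subsystem with :spdk-test:key1 is expected to fail, and the NOT wrapper from autotest_common.sh effectively asserts that non-zero exit. The JSON-RPC request and the error response it produced (-5, "Input/output error") are recorded next; the same check can be repeated by hand against the bdevperf RPC socket, using the paths and flags already shown in this trace, where anything other than a failure would indicate a problem:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  if "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1; then
      echo 'unexpected success: the wrong PSK should not authenticate' >&2   # mirrors what NOT guards against
  fi
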
00:34:42.743 request: 00:34:42.743 { 00:34:42.743 "name": "nvme0", 00:34:42.743 "trtype": "tcp", 00:34:42.743 "traddr": "127.0.0.1", 00:34:42.743 "adrfam": "ipv4", 00:34:42.743 "trsvcid": "4420", 00:34:42.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:42.743 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:42.743 "prchk_reftag": false, 00:34:42.743 "prchk_guard": false, 00:34:42.743 "hdgst": false, 00:34:42.743 "ddgst": false, 00:34:42.743 "psk": ":spdk-test:key1", 00:34:42.743 "allow_unrecognized_csi": false, 00:34:42.743 "method": "bdev_nvme_attach_controller", 00:34:42.743 "req_id": 1 00:34:42.743 } 00:34:42.743 Got JSON-RPC error response 00:34:42.743 response: 00:34:42.743 { 00:34:42.743 "code": -5, 00:34:42.743 "message": "Input/output error" 00:34:42.743 } 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@33 -- # sn=580667999 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 580667999 00:34:43.001 1 links removed 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@33 -- # sn=960843451 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 960843451 00:34:43.001 1 links removed 00:34:43.001 10:53:31 keyring_linux -- keyring/linux.sh@41 -- # killprocess 571804 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 571804 ']' 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 571804 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 571804 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 571804' 00:34:43.001 killing process with pid 571804 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@971 -- # kill 571804 00:34:43.001 Received shutdown signal, test time was about 1.000000 seconds 00:34:43.001 00:34:43.001 
Latency(us) 00:34:43.001 [2024-11-15T09:53:31.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:43.001 [2024-11-15T09:53:31.464Z] =================================================================================================================== 00:34:43.001 [2024-11-15T09:53:31.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:43.001 10:53:31 keyring_linux -- common/autotest_common.sh@976 -- # wait 571804 00:34:43.258 10:53:31 keyring_linux -- keyring/linux.sh@42 -- # killprocess 571788 00:34:43.258 10:53:31 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 571788 ']' 00:34:43.258 10:53:31 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 571788 00:34:43.258 10:53:31 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:34:43.258 10:53:31 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:43.258 10:53:31 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 571788 00:34:43.258 10:53:31 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:43.258 10:53:31 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:43.258 10:53:31 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 571788' 00:34:43.258 killing process with pid 571788 00:34:43.258 10:53:31 keyring_linux -- common/autotest_common.sh@971 -- # kill 571788 00:34:43.258 10:53:31 keyring_linux -- common/autotest_common.sh@976 -- # wait 571788 00:34:43.516 00:34:43.516 real 0m5.258s 00:34:43.516 user 0m10.474s 00:34:43.516 sys 0m1.550s 00:34:43.516 10:53:31 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:43.516 10:53:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:43.516 ************************************ 00:34:43.516 END TEST keyring_linux 00:34:43.516 ************************************ 00:34:43.516 10:53:31 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:34:43.516 10:53:31 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:43.516 10:53:31 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:43.516 10:53:31 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:34:43.516 10:53:31 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:34:43.516 10:53:31 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:34:43.516 10:53:31 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:43.516 10:53:31 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:43.516 10:53:31 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:43.516 10:53:31 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:34:43.516 10:53:31 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:43.516 10:53:31 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:34:43.516 10:53:31 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:43.516 10:53:31 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:43.516 10:53:31 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:34:43.516 10:53:31 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:34:43.516 10:53:31 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:34:43.516 10:53:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:43.516 10:53:31 -- common/autotest_common.sh@10 -- # set +x 00:34:43.516 10:53:31 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:34:43.516 10:53:31 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:34:43.516 10:53:31 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:34:43.516 10:53:31 -- common/autotest_common.sh@10 -- # set +x 00:34:46.045 INFO: APP EXITING 00:34:46.045 INFO: 
killing all VMs 00:34:46.045 INFO: killing vhost app 00:34:46.045 INFO: EXIT DONE 00:34:46.611 0000:81:00.0 (8086 0a54): Already using the nvme driver 00:34:46.611 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:34:46.611 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:34:46.868 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:34:46.868 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:34:46.868 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:34:46.868 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:34:46.868 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:34:46.868 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:34:46.868 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:34:46.868 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:34:46.868 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:34:46.868 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:34:46.868 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:34:46.868 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:34:46.868 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:34:46.868 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:34:48.242 Cleaning 00:34:48.242 Removing: /var/run/dpdk/spdk0/config 00:34:48.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:48.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:48.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:48.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:48.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:48.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:48.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:48.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:48.242 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:48.242 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:48.242 Removing: /var/run/dpdk/spdk1/config 00:34:48.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:48.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:48.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:48.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:48.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:48.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:48.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:48.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:48.242 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:48.242 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:48.242 Removing: /var/run/dpdk/spdk2/config 00:34:48.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:48.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:48.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:48.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:48.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:48.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:48.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:48.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:48.242 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:48.242 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:48.242 Removing: /var/run/dpdk/spdk3/config 00:34:48.242 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:48.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:48.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:48.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:48.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:48.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:48.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:48.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:48.242 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:48.242 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:48.242 Removing: /var/run/dpdk/spdk4/config 00:34:48.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:48.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:48.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:48.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:48.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:48.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:48.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:48.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:48.242 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:48.242 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:48.242 Removing: /dev/shm/bdev_svc_trace.1 00:34:48.242 Removing: /dev/shm/nvmf_trace.0 00:34:48.242 Removing: /dev/shm/spdk_tgt_trace.pid247831 00:34:48.242 Removing: /var/run/dpdk/spdk0 00:34:48.242 Removing: /var/run/dpdk/spdk1 00:34:48.242 Removing: /var/run/dpdk/spdk2 00:34:48.242 Removing: /var/run/dpdk/spdk3 00:34:48.242 Removing: /var/run/dpdk/spdk4 00:34:48.242 Removing: /var/run/dpdk/spdk_pid245890 00:34:48.242 Removing: /var/run/dpdk/spdk_pid246761 00:34:48.242 Removing: /var/run/dpdk/spdk_pid247831 00:34:48.242 Removing: /var/run/dpdk/spdk_pid248270 00:34:48.242 Removing: /var/run/dpdk/spdk_pid249461 00:34:48.242 Removing: /var/run/dpdk/spdk_pid249603 00:34:48.242 Removing: /var/run/dpdk/spdk_pid250326 00:34:48.242 Removing: /var/run/dpdk/spdk_pid250331 00:34:48.242 Removing: /var/run/dpdk/spdk_pid250594 00:34:48.242 Removing: /var/run/dpdk/spdk_pid251927 00:34:48.242 Removing: /var/run/dpdk/spdk_pid252977 00:34:48.242 Removing: /var/run/dpdk/spdk_pid253292 00:34:48.242 Removing: /var/run/dpdk/spdk_pid253493 00:34:48.242 Removing: /var/run/dpdk/spdk_pid253703 00:34:48.242 Removing: /var/run/dpdk/spdk_pid253899 00:34:48.242 Removing: /var/run/dpdk/spdk_pid254118 00:34:48.242 Removing: /var/run/dpdk/spdk_pid254329 00:34:48.243 Removing: /var/run/dpdk/spdk_pid254526 00:34:48.243 Removing: /var/run/dpdk/spdk_pid254716 00:34:48.243 Removing: /var/run/dpdk/spdk_pid257213 00:34:48.243 Removing: /var/run/dpdk/spdk_pid257446 00:34:48.243 Removing: /var/run/dpdk/spdk_pid257652 00:34:48.500 Removing: /var/run/dpdk/spdk_pid257661 00:34:48.500 Removing: /var/run/dpdk/spdk_pid258028 00:34:48.500 Removing: /var/run/dpdk/spdk_pid258097 00:34:48.501 Removing: /var/run/dpdk/spdk_pid258421 00:34:48.501 Removing: /var/run/dpdk/spdk_pid258531 00:34:48.501 Removing: /var/run/dpdk/spdk_pid258709 00:34:48.501 Removing: /var/run/dpdk/spdk_pid258829 00:34:48.501 Removing: /var/run/dpdk/spdk_pid258993 00:34:48.501 Removing: /var/run/dpdk/spdk_pid259012 00:34:48.501 Removing: /var/run/dpdk/spdk_pid259506 00:34:48.501 Removing: /var/run/dpdk/spdk_pid259659 00:34:48.501 Removing: /var/run/dpdk/spdk_pid259864 00:34:48.501 Removing: /var/run/dpdk/spdk_pid262097 00:34:48.501 
Removing: /var/run/dpdk/spdk_pid264624 00:34:48.501 Removing: /var/run/dpdk/spdk_pid271649 00:34:48.501 Removing: /var/run/dpdk/spdk_pid272155 00:34:48.501 Removing: /var/run/dpdk/spdk_pid274636 00:34:48.501 Removing: /var/run/dpdk/spdk_pid274841 00:34:48.501 Removing: /var/run/dpdk/spdk_pid277472 00:34:48.501 Removing: /var/run/dpdk/spdk_pid281853 00:34:48.501 Removing: /var/run/dpdk/spdk_pid284042 00:34:48.501 Removing: /var/run/dpdk/spdk_pid290463 00:34:48.501 Removing: /var/run/dpdk/spdk_pid295702 00:34:48.501 Removing: /var/run/dpdk/spdk_pid296905 00:34:48.501 Removing: /var/run/dpdk/spdk_pid297572 00:34:48.501 Removing: /var/run/dpdk/spdk_pid308075 00:34:48.501 Removing: /var/run/dpdk/spdk_pid310367 00:34:48.501 Removing: /var/run/dpdk/spdk_pid338818 00:34:48.501 Removing: /var/run/dpdk/spdk_pid342005 00:34:48.501 Removing: /var/run/dpdk/spdk_pid345829 00:34:48.501 Removing: /var/run/dpdk/spdk_pid350232 00:34:48.501 Removing: /var/run/dpdk/spdk_pid350234 00:34:48.501 Removing: /var/run/dpdk/spdk_pid350891 00:34:48.501 Removing: /var/run/dpdk/spdk_pid351426 00:34:48.501 Removing: /var/run/dpdk/spdk_pid352085 00:34:48.501 Removing: /var/run/dpdk/spdk_pid352487 00:34:48.501 Removing: /var/run/dpdk/spdk_pid352489 00:34:48.501 Removing: /var/run/dpdk/spdk_pid352783 00:34:48.501 Removing: /var/run/dpdk/spdk_pid352979 00:34:48.501 Removing: /var/run/dpdk/spdk_pid352997 00:34:48.501 Removing: /var/run/dpdk/spdk_pid353657 00:34:48.501 Removing: /var/run/dpdk/spdk_pid354711 00:34:48.501 Removing: /var/run/dpdk/spdk_pid355364 00:34:48.501 Removing: /var/run/dpdk/spdk_pid355771 00:34:48.501 Removing: /var/run/dpdk/spdk_pid355888 00:34:48.501 Removing: /var/run/dpdk/spdk_pid356033 00:34:48.501 Removing: /var/run/dpdk/spdk_pid356928 00:34:48.501 Removing: /var/run/dpdk/spdk_pid357669 00:34:48.501 Removing: /var/run/dpdk/spdk_pid362990 00:34:48.501 Removing: /var/run/dpdk/spdk_pid391394 00:34:48.501 Removing: /var/run/dpdk/spdk_pid394320 00:34:48.501 Removing: /var/run/dpdk/spdk_pid395498 00:34:48.501 Removing: /var/run/dpdk/spdk_pid396816 00:34:48.501 Removing: /var/run/dpdk/spdk_pid396917 00:34:48.501 Removing: /var/run/dpdk/spdk_pid397021 00:34:48.501 Removing: /var/run/dpdk/spdk_pid397154 00:34:48.501 Removing: /var/run/dpdk/spdk_pid397682 00:34:48.501 Removing: /var/run/dpdk/spdk_pid399002 00:34:48.501 Removing: /var/run/dpdk/spdk_pid399739 00:34:48.501 Removing: /var/run/dpdk/spdk_pid400167 00:34:48.501 Removing: /var/run/dpdk/spdk_pid401781 00:34:48.501 Removing: /var/run/dpdk/spdk_pid402210 00:34:48.501 Removing: /var/run/dpdk/spdk_pid402779 00:34:48.501 Removing: /var/run/dpdk/spdk_pid405678 00:34:48.501 Removing: /var/run/dpdk/spdk_pid409083 00:34:48.501 Removing: /var/run/dpdk/spdk_pid409084 00:34:48.501 Removing: /var/run/dpdk/spdk_pid409085 00:34:48.501 Removing: /var/run/dpdk/spdk_pid411301 00:34:48.501 Removing: /var/run/dpdk/spdk_pid416285 00:34:48.501 Removing: /var/run/dpdk/spdk_pid418936 00:34:48.501 Removing: /var/run/dpdk/spdk_pid422696 00:34:48.501 Removing: /var/run/dpdk/spdk_pid423641 00:34:48.501 Removing: /var/run/dpdk/spdk_pid424632 00:34:48.501 Removing: /var/run/dpdk/spdk_pid425696 00:34:48.501 Removing: /var/run/dpdk/spdk_pid428469 00:34:48.501 Removing: /var/run/dpdk/spdk_pid431055 00:34:48.501 Removing: /var/run/dpdk/spdk_pid433333 00:34:48.501 Removing: /var/run/dpdk/spdk_pid437669 00:34:48.501 Removing: /var/run/dpdk/spdk_pid437671 00:34:48.501 Removing: /var/run/dpdk/spdk_pid440458 00:34:48.501 Removing: /var/run/dpdk/spdk_pid440712 00:34:48.501 Removing: 
/var/run/dpdk/spdk_pid440848 00:34:48.501 Removing: /var/run/dpdk/spdk_pid441111 00:34:48.501 Removing: /var/run/dpdk/spdk_pid441126 00:34:48.501 Removing: /var/run/dpdk/spdk_pid444386 00:34:48.501 Removing: /var/run/dpdk/spdk_pid444973 00:34:48.501 Removing: /var/run/dpdk/spdk_pid447639 00:34:48.501 Removing: /var/run/dpdk/spdk_pid449619 00:34:48.501 Removing: /var/run/dpdk/spdk_pid453047 00:34:48.501 Removing: /var/run/dpdk/spdk_pid456367 00:34:48.501 Removing: /var/run/dpdk/spdk_pid463129 00:34:48.501 Removing: /var/run/dpdk/spdk_pid467599 00:34:48.501 Removing: /var/run/dpdk/spdk_pid467601 00:34:48.501 Removing: /var/run/dpdk/spdk_pid481299 00:34:48.501 Removing: /var/run/dpdk/spdk_pid481825 00:34:48.501 Removing: /var/run/dpdk/spdk_pid482232 00:34:48.501 Removing: /var/run/dpdk/spdk_pid482662 00:34:48.501 Removing: /var/run/dpdk/spdk_pid483223 00:34:48.501 Removing: /var/run/dpdk/spdk_pid483646 00:34:48.501 Removing: /var/run/dpdk/spdk_pid484153 00:34:48.759 Removing: /var/run/dpdk/spdk_pid484563 00:34:48.759 Removing: /var/run/dpdk/spdk_pid487074 00:34:48.759 Removing: /var/run/dpdk/spdk_pid487213 00:34:48.759 Removing: /var/run/dpdk/spdk_pid491032 00:34:48.759 Removing: /var/run/dpdk/spdk_pid491200 00:34:48.759 Removing: /var/run/dpdk/spdk_pid494566 00:34:48.759 Removing: /var/run/dpdk/spdk_pid497066 00:34:48.759 Removing: /var/run/dpdk/spdk_pid504103 00:34:48.759 Removing: /var/run/dpdk/spdk_pid504513 00:34:48.759 Removing: /var/run/dpdk/spdk_pid507019 00:34:48.759 Removing: /var/run/dpdk/spdk_pid507181 00:34:48.759 Removing: /var/run/dpdk/spdk_pid509800 00:34:48.759 Removing: /var/run/dpdk/spdk_pid514109 00:34:48.759 Removing: /var/run/dpdk/spdk_pid516273 00:34:48.759 Removing: /var/run/dpdk/spdk_pid522540 00:34:48.759 Removing: /var/run/dpdk/spdk_pid527751 00:34:48.759 Removing: /var/run/dpdk/spdk_pid529043 00:34:48.759 Removing: /var/run/dpdk/spdk_pid529706 00:34:48.759 Removing: /var/run/dpdk/spdk_pid539761 00:34:48.759 Removing: /var/run/dpdk/spdk_pid542018 00:34:48.759 Removing: /var/run/dpdk/spdk_pid544017 00:34:48.759 Removing: /var/run/dpdk/spdk_pid549948 00:34:48.759 Removing: /var/run/dpdk/spdk_pid549953 00:34:48.759 Removing: /var/run/dpdk/spdk_pid552983 00:34:48.759 Removing: /var/run/dpdk/spdk_pid554384 00:34:48.759 Removing: /var/run/dpdk/spdk_pid555793 00:34:48.759 Removing: /var/run/dpdk/spdk_pid556541 00:34:48.759 Removing: /var/run/dpdk/spdk_pid558058 00:34:48.759 Removing: /var/run/dpdk/spdk_pid558938 00:34:48.759 Removing: /var/run/dpdk/spdk_pid564268 00:34:48.759 Removing: /var/run/dpdk/spdk_pid564614 00:34:48.759 Removing: /var/run/dpdk/spdk_pid565002 00:34:48.759 Removing: /var/run/dpdk/spdk_pid566690 00:34:48.759 Removing: /var/run/dpdk/spdk_pid566973 00:34:48.759 Removing: /var/run/dpdk/spdk_pid567369 00:34:48.759 Removing: /var/run/dpdk/spdk_pid569826 00:34:48.759 Removing: /var/run/dpdk/spdk_pid569839 00:34:48.759 Removing: /var/run/dpdk/spdk_pid571313 00:34:48.759 Removing: /var/run/dpdk/spdk_pid571788 00:34:48.759 Removing: /var/run/dpdk/spdk_pid571804 00:34:48.759 Clean 00:34:48.759 10:53:37 -- common/autotest_common.sh@1451 -- # return 0 00:34:48.759 10:53:37 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:34:48.759 10:53:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:48.759 10:53:37 -- common/autotest_common.sh@10 -- # set +x 00:34:48.759 10:53:37 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:34:48.759 10:53:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:48.759 10:53:37 -- common/autotest_common.sh@10 -- 
# set +x 00:34:48.759 10:53:37 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:48.759 10:53:37 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:48.759 10:53:37 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:48.759 10:53:37 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:34:48.759 10:53:37 -- spdk/autotest.sh@394 -- # hostname 00:34:48.759 10:53:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:49.017 geninfo: WARNING: invalid characters removed from testname! 00:35:21.089 10:54:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:24.368 10:54:12 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:27.645 10:54:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:30.170 10:54:18 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:33.473 10:54:21 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:36.753 10:54:24 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:39.278 10:54:27 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:39.278 10:54:27 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:39.278 10:54:27 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:35:39.278 10:54:27 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:39.279 10:54:27 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:39.279 10:54:27 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:39.279 + [[ -n 176021 ]] 00:35:39.279 + sudo kill 176021 00:35:39.288 [Pipeline] } 00:35:39.303 [Pipeline] // stage 00:35:39.308 [Pipeline] } 00:35:39.321 [Pipeline] // timeout 00:35:39.326 [Pipeline] } 00:35:39.340 [Pipeline] // catchError 00:35:39.344 [Pipeline] } 00:35:39.358 [Pipeline] // wrap 00:35:39.365 [Pipeline] } 00:35:39.377 [Pipeline] // catchError 00:35:39.386 [Pipeline] stage 00:35:39.388 [Pipeline] { (Epilogue) 00:35:39.400 [Pipeline] catchError 00:35:39.401 [Pipeline] { 00:35:39.413 [Pipeline] echo 00:35:39.414 Cleanup processes 00:35:39.419 [Pipeline] sh 00:35:39.705 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:39.705 583196 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:39.720 [Pipeline] sh 00:35:40.005 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:40.005 ++ awk '{print $1}' 00:35:40.005 ++ grep -v 'sudo pgrep' 00:35:40.005 + sudo kill -9 00:35:40.005 + true 00:35:40.017 [Pipeline] sh 00:35:40.301 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:50.271 [Pipeline] sh 00:35:50.557 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:50.557 Artifacts sizes are good 00:35:50.574 [Pipeline] archiveArtifacts 00:35:50.584 Archiving artifacts 00:35:50.737 [Pipeline] sh 00:35:51.094 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:51.112 [Pipeline] cleanWs 00:35:51.121 [WS-CLEANUP] Deleting project workspace... 00:35:51.121 [WS-CLEANUP] Deferred wipeout is used... 00:35:51.127 [WS-CLEANUP] done 00:35:51.129 [Pipeline] } 00:35:51.144 [Pipeline] // catchError 00:35:51.154 [Pipeline] sh 00:35:51.435 + logger -p user.info -t JENKINS-CI 00:35:51.442 [Pipeline] } 00:35:51.456 [Pipeline] // stage 00:35:51.460 [Pipeline] } 00:35:51.474 [Pipeline] // node 00:35:51.478 [Pipeline] End of Pipeline 00:35:51.511 Finished: SUCCESS